<chapter id="testing">
    <title>Writing Conformance tests</title>

    <para>
      Note: This part of the documentation is still very much a work in
      progress and is in no way complete.
    </para>

    <sect1 id="testing-intro">
      <title>Introduction</title>
      <para>
        The Windows API follows no standard: it is itself a de facto
        standard, and deviations from that standard, even small ones, often
        cause applications to crash or misbehave in some way. Furthermore,
        a conformance test suite is the most accurate (if not necessarily
        the most complete) form of API documentation and can be used to
        supplement the Windows API documentation.
      </para>
      <para>
        Writing a conformance test suite for more than 10000 APIs is no small
        undertaking. Fortunately it can prove very useful to the development
        of Wine long before it is complete.
        <itemizedlist>
          <listitem>
            <para>
              The conformance test suite must run on Windows. This is
              necessary to provide a reasonable way to verify its accuracy.
              Furthermore the tests must pass successfully on all Windows
              platforms (tests not relevant to a given platform should be
              skipped).
            </para>
            <para>
              A consequence of this is that the test suite will provide a
              great way to detect variations in the API between different
              Windows versions. For instance, this can provide insights
              into the often undocumented differences between the Win9x
              and NT Windows families.
            </para>
            <para>
              However, one must remember that the goal of Wine is to run
              Windows applications on Linux, not to be a clone of any specific
              Windows version. So such variations must only be tested for when
              relevant to that goal.
            </para>
          </listitem>
          <listitem>
            <para>
              Writing conformance tests is also an easy way to discover
              bugs in Wine. Of course, before fixing the bugs discovered in
              this way, one must first make sure that the new tests do pass
              successfully on at least one Windows 9x and one Windows NT
              version.
            </para>
            <para>
              Bugs discovered this way should also be easier to fix. Unlike
              with some mysterious application crash, when a conformance test
              fails the expected behavior and the APIs being tested are known,
              which greatly simplifies the diagnosis.
            </para>
          </listitem>
          <listitem>
            <para>
              Simply running the test suite regularly in Wine turns it into a
              great tool for detecting regressions. When a test fails, one
              immediately knows what the expected behavior was and which APIs
              are involved. Thus regressions caught this way should be
              detected earlier, because it is easy to run all tests on a
              regular basis, and should be easier to fix because of the
              reduced diagnostic work.
            </para>
          </listitem>
          <listitem>
            <para>
              Tests written in advance of the Wine development (possibly even
              by non-Wine developers) can also simplify the work of the
              future implementer by making it easier to check the
              correctness of the new code.
            </para>
          </listitem>
          <listitem>
            <para>
              Conformance tests will also come in handy when testing Wine on
              new (or not as widely used) architectures such as FreeBSD,
              Solaris x86 or even non-x86 systems. Even when the port does
              not involve any significant change in the thread management,
              exception handling or other low-level aspects of Wine, new
              architectures can expose subtle bugs that can be hard to
              diagnose when debugging regular (complex) applications.
            </para>
          </listitem>
        </itemizedlist>
      </para>
    </sect1>


    <sect1 id="testing-what">
      <title>What to test for?</title>
      <para>
        The first thing to test for is the documented behavior of APIs
        such as CreateFile. For instance one can create a file using a
        long pathname, check that the behavior is correct when the file
        already exists, try to open the file using the corresponding short
        pathname, convert the filename to Unicode and try to open it using
        CreateFileW, and all other things which are documented and that
        applications rely on.
      </para>
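      <para>
        For instance, a minimal sketch of such a CreateFile check might look
        like the following (the file name and the helper function name are
        arbitrary and only used for illustration):
<screen>
static void test_CreateFileA(void)
{
    HANDLE file;
    BOOL ret;

    /* creating a new file must succeed */
    file = CreateFileA( "winetest.tmp", GENERIC_WRITE, 0, NULL,
                        CREATE_NEW, FILE_ATTRIBUTE_NORMAL, 0 );
    ok( file != INVALID_HANDLE_VALUE, "CreateFileA failed, error %ld", GetLastError() );
    CloseHandle( file );

    /* CREATE_NEW must fail if the file already exists */
    file = CreateFileA( "winetest.tmp", GENERIC_WRITE, 0, NULL,
                        CREATE_NEW, FILE_ATTRIBUTE_NORMAL, 0 );
    ok( file == INVALID_HANDLE_VALUE, "CREATE_NEW succeeded on an existing file" );
    ok( GetLastError() == ERROR_FILE_EXISTS, "expected ERROR_FILE_EXISTS, got %ld", GetLastError() );

    ret = DeleteFileA( "winetest.tmp" );
    ok( ret, "DeleteFileA failed, error %ld", GetLastError() );
}
</screen>
      </para>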
      <para>
        While the testing framework is not specifically geared towards this
        type of test, it is also possible to test the behavior of Windows
        messages. To do so, create a window, preferably a hidden one so that
        it does not steal the focus when running the tests, and send messages
        to that window or to controls in that window. Then, in the message
        procedure, check that you receive the expected messages and with the
        correct parameters.
      </para>
      <para>
        For instance you could create an edit control and use WM_SETTEXT to
        set its contents, possibly check length restrictions, and verify the
        results using WM_GETTEXT. Similarly one could create a listbox and
        check the effect of LB_DELETESTRING on the list's number of items,
        selected items list, highlighted item, etc.
      </para>
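      <para>
        A minimal sketch of the edit control check might look like this,
        assuming the relevant windows headers (e.g. winuser.h) are included;
        the strings and buffer size are arbitrary:
<screen>
static void test_edit_settext(void)
{
    char buffer[64];
    HWND hwnd;
    LRESULT ret;

    /* create a hidden edit control so the tests do not steal the focus */
    hwnd = CreateWindowA( "EDIT", NULL, ES_AUTOHSCROLL, 0, 0, 100, 20,
                          NULL, NULL, NULL, NULL );
    ok( hwnd != NULL, "could not create the edit control" );

    ret = SendMessageA( hwnd, WM_SETTEXT, 0, (LPARAM)"foobar" );
    ok( ret == TRUE, "WM_SETTEXT failed" );

    /* WM_GETTEXT returns the number of characters copied, not counting
     * the terminating null character
     */
    ret = SendMessageA( hwnd, WM_GETTEXT, sizeof(buffer), (LPARAM)buffer );
    ok( ret == 6, "WM_GETTEXT returned %ld", ret );
    ok( lstrcmpA( buffer, "foobar" ) == 0, "got '%s' instead of 'foobar'", buffer );

    DestroyWindow( hwnd );
}
</screen>
      </para>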
      <para>
        However, undocumented behavior should not be tested for unless an
        application relies on it, in which case the test should mention that
        application, or unless applications can reasonably be expected to
        rely on it; a typical example is APIs that return the required
        buffer size when the buffer pointer is NULL.
      </para>
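      <para>
        For example, such a buffer size check could be written along these
        lines, here using GetCurrentDirectoryA purely to illustrate the
        pattern:
<screen>
  char buffer[MAX_PATH];
  DWORD needed, len;

  /* passing a NULL buffer of size zero returns the required size,
   * including the terminating null character
   */
  needed = GetCurrentDirectoryA( 0, NULL );
  ok( needed != 0, "GetCurrentDirectoryA failed, error %ld", GetLastError() );

  len = GetCurrentDirectoryA( sizeof(buffer), buffer );
  ok( len + 1 == needed, "expected %ld, got %ld", len + 1, needed );
</screen>
      </para>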
    </sect1>


    <sect1 id="testing-wine">
      <title>Running the tests in Wine</title>
      <para>
        The simplest way to run the tests in Wine is to type 'make test' in
        the Wine sources top level directory. This will run all the Wine
        conformance tests.
      </para>
      <para>
        The tests for a specific Wine library are located in a 'tests'
        directory in that library's directory. Each test is contained in a
        file (e.g. <filename>dlls/kernel/tests/thread.c</>). Each
        file itself contains many checks concerning one or more related APIs.
      </para>
      <para>
        So to run all the tests related to a given Wine library, go to the
        corresponding 'tests' directory and type 'make test'. This will
        compile the tests, run them, and create an '<replaceable>xxx</>.ok'
        file for each test that passes successfully. If you only want to
        run the tests contained in the <filename>thread.c</> file of the
        kernel library, you would do:
<screen>
<prompt>$ </>cd dlls/kernel/tests
<prompt>$ </>make thread.ok
</screen>
      </para>
      <para>
        Note that if the test has already been run and is up to date (i.e. if
        neither the kernel library nor the <filename>thread.c</> file has
        changed since the <filename>thread.ok</> file was created), then make
        will say so. To force the test to be re-run, delete the
        <filename>thread.ok</> file, and run the make command again.
      </para>
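      <para>
        For instance:
<screen>
<prompt>$ </>rm thread.ok
<prompt>$ </>make thread.ok
</screen>
      </para>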
      <para>
        You can also run tests manually using a command similar to the
        following:
<screen>
<prompt>$ </>../../../tools/runtest -q -P wine -M kernel32.dll -p kernel32_test.exe.so thread.c
<prompt>$ </>../../../tools/runtest -P wine -p kernel32_test.exe.so thread.c
thread.c: 86 tests executed, 5 marked as todo, 0 failures.
</screen>
        The '-P wine' option defines the platform that is currently being
        tested. Remove the '-q' option if you want the testing framework
        to report statistics about the number of successful and failed tests.
        Run <command>runtest -h</> for more details.
      </para>
    </sect1>


    <sect1 id="testing-windows">
      <title>Building and running the tests on Windows</title>
      <sect2>
        <title>Using pre-compiled binaries</title>
        <para>
          Unfortunately there are no pre-compiled binaries yet. However, if
          you send an email to the Wine development mailing list you can
          probably get someone to send them to you, and maybe motivate some
          kind soul to put in place a mechanism for publishing such binaries
          on a regular basis.
        </para>
      </sect2>
      <sect2>
        <title>With Visual C++</title>
        <itemizedlist>
          <listitem><para>
            Get the Wine sources.
          </para></listitem>
          <listitem><para>
            Run msvcmaker to generate Visual C++ project files for the tests.
            'msvcmaker' is a perl script so you may be able to run it on
            Windows.
<screen>
<prompt>$ </>./tools/winapi/msvcmaker --no-wine
</screen>
          </para></listitem>
          <listitem><para>
            If the previous steps were done on your Linux development
            machine, make the Wine sources accessible to the Windows machine
            on which you are going to compile them. Typically you would do
            this using Samba, but copying them over would work too.
          </para></listitem>
          <listitem><para>
            On the Windows machine, open the <filename>winetest.dsw</>
            workspace. This will load each test's project. For each test there
            are two configurations: one compiles the test with the Wine
            headers, and the other uses the Visual C++ headers. Some tests
            will compile fine with the former, but most will require the
            latter.
          </para></listitem>
          <listitem><para>
            Open the <menuchoice><guimenu>Build</> <guimenu>Batch
            build...</></> menu and select the tests and build configurations
            you want to build. Then click on <guibutton>Build</>.
          </para></listitem>
          <listitem><para>
            To run a specific test from Visual C++, go to
            <menuchoice><guimenu>Project</> <guimenu>Settings...</></>. There
            select that test's project and build configuration and go to the
            <guilabel>Debug</> tab. There type the name of the specific test
            to run (e.g. 'thread') in the <guilabel>Program arguments</>
            field. Validate your change by clicking on <guibutton>Ok</> and
            start the test by clicking the red exclamation mark (or hitting
            'F5' or any other usual method).
          </para></listitem>
          <listitem><para>
            You can also run the tests from the command line. You will find
            them in either <filename>Output\Win32_Wine_Headers</> or
            <filename>Output\Win32_MSVC_Headers</> depending on the build
            method. So to run the kernel 'thread' tests you would do:
<screen>
<prompt>C:\&gt;</>cd dlls\kernel\tests\Output\Win32_MSVC_Headers
<prompt>C:\dlls\kernel\tests\Output\Win32_MSVC_Headers&gt;</>kernel32_test thread
</screen>
          </para></listitem>
        </itemizedlist>
      </sect2>
      <sect2>
        <title>With MinGW</title>
        <para>
          This needs to be documented. The best approach may be to ask on the
          Wine development mailing list and to update this documentation with
          the results of your inquiry.
        </para>
      </sect2>
      <sect2>
        <title>Cross compiling with MinGW on Linux</title>
        <para>
          Here is how to generate Windows executables for the tests straight
          from the comfort of Linux.
        </para>
        <itemizedlist>
          <listitem><para>
            First you need to get the MinGW cross-compiler. On Debian all
            you need to do is type <command>apt-get install mingw32</>.
          </para></listitem>
          <listitem><para>
            If you had already run <command>configure</>, then delete
            <filename>config.cache</> and re-run <command>configure</>.
            You can then run <command>make crosstest</>. To sum up:
<screen>
<prompt>$ </><userinput>rm config.cache</>
<prompt>$ </><userinput>./configure</>
<prompt>$ </><userinput>make crosstest</>
</screen>
          </para></listitem>
          <listitem><para>
            If you get an error when compiling <filename>winsock.h</> then
            you probably need to apply the following patch:
            <ulink url="http://www.winehq.com/hypermail/wine-patches/2002/12/0157.html">http://www.winehq.com/hypermail/wine-patches/2002/12/0157.html</>
          </para></listitem>
        </itemizedlist>
      </sect2>
    </sect1>


    <sect1 id="testing-test">
      <title>Inside a test</title>

      <para>
        When writing new checks you can either modify an existing test file or
        add a new one. If your tests are related to the tests performed by an
        existing file, then add them to that file. Otherwise create a new .c
        file in the tests directory and add that file to the
        <varname>CTESTS</> variable in <filename>Makefile.in</>.
      </para>
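      <para>
        For instance, if the new file is called <filename>paths.c</>, the
        <varname>CTESTS</> variable in <filename>Makefile.in</> might end up
        looking something like this (the other entries stand for whatever
        test files are already listed):
<screen>
CTESTS = \
	paths.c \
	thread.c
</screen>
      </para>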
      <para>
        A new test file will look something like the following:
<screen>
#include &lt;wine/test.h&gt;
#include &lt;winbase.h&gt;

/* Maybe auxiliary functions and definitions here */

START_TEST(paths)
{
   /* Write your checks here or put them in functions you will call from
    * here
    */
}
</screen>
      </para>
      <para>
        The test's entry point is the START_TEST section. This is where
        execution will start. You can put all your tests in that section but
        it may be better to split related checks in functions you will call
        from the START_TEST section. The parameter to START_TEST must match
        the name of the C file. So in the above example the C file would be
        called <filename>paths.c</>.
      </para>
      <para>
        Tests should start by including the <filename>wine/test.h</> header.
        This header will provide you access to all the testing framework
        functions. You can then include the windows header you need, but make
        sure to not include any Unix or Wine specific header: tests must
        compile on Windows.
      </para>
<!-- FIXME: Can we include windows.h now? We should be able to but currently __WINE__ is defined thus making it impossible. -->
<!-- FIXME: Add recommendations about what to print in case of a failure: be informative -->
      <para>
        You can use <function>trace</> to print informational messages. Note
        that these messages will only be printed if 'runtest -v' is being used.
<screen>
  trace("testing GlobalAddAtomA");
  trace("foo=%d",foo);
</screen>
      </para>
      <para>
        Then just call functions and use <function>ok</> to make sure that
        they behaved as expected:
<screen>
  ATOM atom = GlobalAddAtomA( "foobar" );
  ok( GlobalFindAtomA( "foobar" ) == atom, "could not find atom foobar" );
  ok( GlobalFindAtomA( "FOOBAR" ) == atom, "could not find atom FOOBAR" );
</screen>
        The first parameter of <function>ok</> is an expression which must
        evaluate to true if the test was successful. The next parameter is a
        printf-compatible format string which is displayed in case the test
        failed, and the following optional parameters depend on the format
        string.
      </para>
      <para>
        It is important to display an informative message when a test fails:
        a good error message will help the Wine developer identify exactly
        what went wrong without having to add too many other printfs. For
        instance it may be useful to print the error code if relevant, or the
        expected and actual values. In that respect, for some tests
        you may want to define a macro such as the following:
<screen>
#define eq(received, expected, label, type) \
        ok((received) == (expected), "%s: got " type " instead of " type, (label),(received),(expected))

...

eq( b, curr_val, "SPI_{GET,SET}BEEP", "%d" );
</screen>
       </para>
    </sect1>


    <sect1 id="testing-platforms">
      <title>Handling platform issues</title>
      <para>
        Some checks may be written before they pass successfully in Wine.
        Without some mechanism, such checks would potentially generate
        hundreds of known failures for months each time the tests are run.
        This would make it hard to detect new failures caused by a regression,
        or to detect that a patch fixed a long-standing issue.
      </para>
      <para>
        Thus the Wine testing framework has the concept of platforms and
        groups of checks can be declared as expected to fail on some of them.
        In the most common case, one would declare a group of tests as
        expected to fail in Wine. To do so, use the following construct:
<screen>
todo_wine {
    SetLastError( 0xdeadbeef );
    ok( GlobalAddAtomA(0) == 0 && GetLastError() == 0xdeadbeef, "failed to add atom 0" );
}
</screen>
        On Windows the above check would be performed normally, but on Wine it
        would be expected to fail, and not cause the failure of the whole
        test. However, if that check were to succeed in Wine, it would
        cause the test to fail, thus making it easy to detect when something
        has changed that fixes a bug. Also note that todo checks are counted
        separately from regular checks so that the testing statistics remain
        meaningful. Finally, note that todo sections can be nested so that if
        a test only fails on the cygwin and reactos platforms, one would
        write:
<screen>
todo("cygwin") {
    todo("reactos") {
        ...
    }
}
</screen>
        <!-- FIXME: Would we really have platforms such as reactos, cygwin, freebsd & co? -->
        But specific platforms should not be nested inside a todo_wine section
        since that would be redundant.
      </para>
      <para>
        When writing tests you will also encounter differences between Windows
        9x and Windows NT platforms. Such differences should be treated
        differently from the platform issues mentioned above. In particular
        you should remember that the goal of Wine is not to be a clone of any
        specific Windows version but to run Windows applications on Unix.
      </para>
      <para>
        So, if an API returns a different error code on Windows 9x and
        Windows NT, your check should just verify that Wine returns one or
        the other:
<screen>
ok ( GetLastError() == WIN9X_ERROR || GetLastError() == NT_ERROR, ...);
</screen>
      </para>
      <para>
        If an API is only present on some Windows platforms, then use
        LoadLibrary and GetProcAddress to check if it is implemented and
        invoke it. Remember, tests must run on all Windows platforms.
        Similarly, conformance tests should not try to correlate the Windows
        version returned by GetVersion with whether given APIs are
        implemented or not. Again, the goal of Wine is to run Windows
        applications (which do not do such checks), and not be a clone of a
        specific Windows version.
      </para>
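      <para>
        For instance, a minimal sketch of this approach, using
        GetLongPathNameA (which is not available on all Windows versions) as
        an example, could look like this:
<screen>
static DWORD (WINAPI *pGetLongPathNameA)(LPCSTR, LPSTR, DWORD);

static void test_GetLongPathNameA(void)
{
    char longpath[MAX_PATH];
    DWORD ret;

    pGetLongPathNameA = (void *)GetProcAddress( GetModuleHandleA("kernel32.dll"),
                                                "GetLongPathNameA" );
    if (!pGetLongPathNameA)
    {
        /* the API is not available on this platform, skip these checks */
        trace("GetLongPathNameA is not available");
        return;
    }

    ret = pGetLongPathNameA( "C:\\", longpath, sizeof(longpath) );
    ok( ret != 0, "GetLongPathNameA failed, error %ld", GetLastError() );
}
</screen>
      </para>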
      <para>
        FIXME: What about checks that cause the process to crash due to a bug?
      </para>
    </sect1>


<!-- FIXME: Strategies for testing threads, testing network stuff,
 file handling, eq macro... -->

  </chapter>

<!-- Keep this comment at the end of the file
Local variables:
mode: sgml
sgml-parent-document:("wine-doc.sgml" "set" "book" "part" "chapter" "")
End:
-->