@@ -119,101 +119,17 @@ preserving the code semantics.
## Test you must.
- Whenever applicable, merge requests must come with tests
- exercising the affected features: regression tests for bug fixes,
- and correctness tests for new features (including corner cases and
- failure cases). For regression tests, testing other aspects of the
- feature (in particular, related edge cases) that are not currently
- covered is a good way to catch other instances of bugs -- this did
- happen several times in the past. Warnings and errors should also
- be tested.
-
- Tests go in the sub-directories of `testsuite/tests`. Running
- `make all` in `testsuite/` runs all tests (this takes
- a few minutes), and you can use `make one DIR=tests/foo` to run
- the tests of a specific sub-directory. There are many kinds of tests
- already, so the easiest way to start is to extend or copy an
- existing test.
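
Concretely, a session might look like the sketch below, run in a tree where the compiler has already been built; `tests/foo` is a placeholder for the sub-directory holding your tests:

```sh
# Run the entire testsuite (takes a few minutes).
cd testsuite
make all

# Run only the tests of one sub-directory, here the placeholder
# directory tests/foo.
make one DIR=tests/foo
```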
-
- In general, running a test produces one (or several) `.result` files,
- which are compared to one (or several) `.reference` files present in the
- repository; the test succeeds if they are identical. If your patch
- breaks a test, diffing the `.result` and `.reference` files is a way to
- see what went wrong. Some reasonable compiler changes affect the
- compiler output in ways that make those outputs differ (for example,
- slight modifications of warning or error messages may break all tests
- checking warnings). If you are positive that the new `.result` file
- is correct (and that the change in behavior does not endanger
- backward compatibility), you can replace the old `.reference` file
- with it. Finally, when adding new tests, do not forget to include your
- `.reference` files (but not `.result`) in the versioned repository.
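
When a test breaks, the diff-then-promote workflow described above looks roughly like this (`tests/foo/bar.*` are hypothetical file names):

```sh
# See how the produced output differs from the expected one.
diff -u tests/foo/bar.reference tests/foo/bar.result

# Only once you are positive the new output is correct (and does not
# endanger backward compatibility): promote it as the expected output.
cp tests/foo/bar.result tests/foo/bar.reference
```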
-
- Testing is also a way to make sure reviewers see working
- (and failing) examples of the feature you fix, extend or
- introduce, rather than just an abstract description of it.
+ Whenever applicable, merge requests must come with tests exercising
+ the affected features: regression tests for bug fixes, and correctness
+ tests for new features (including corner cases and
+ failure cases). Warnings and errors should also be tested.
+
+ See [testsuite/HACKING.adoc](testsuite/HACKING.adoc) for details on
+ how to write tests and run the testsuite.
- ### Run tests before sending a PR
-
- You should run all the tests before creating the merge request or
- pushing new commits (even if Travis will also do it for you):
- `make tests` (this takes a few minutes).
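
In other words (a sketch, assuming the tree is already fully built):

```sh
# From the root of the source tree, after a full build:
make tests   # takes a few minutes
```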
-
- Unfortunately, some of the `lib-threads` tests are non-deterministic
- and fail once in a while (it's hard to test these well). If they
- consistently break after your change, you should investigate, but if
- you only see a transient failure once and your change has no reason
- to affect threading, it's probably not your fault.
-
-
- ### Benchmarking
-
- If your contribution can impact the performance of the code generated
- by the native compiler, you can use the benchmarking infrastructure
- that the flambda team put together to assess the consequences of your
- contribution. It has two publicly accessible parts:
-
- - The website that hosts benchmark results, at
- [http://bench.flambda.ocamlpro.com/](http://bench.flambda.ocamlpro.com/).
- It exposes two ways to compare compilers: the first, under the header
- `Plot a given benchmark`, allows you to select a benchmark and
- see graphs plotting the evolution of the performance of the different
- compilers over time. The second, under `Compare two runs`, allows
- you to get an overview of the differences between a reference compiler
- (selected using the `ref` button) and a compiler under test (using
- the `tst` button). Clicking on the `Compare` button at the bottom
- right of the page will create a new page containing summaries and
- raw data comparing the selected runs.
-
- - The git repository containing the data about which benchmarks
- to run, and on which compilers, at
- [https://github.com/OCamlPro/ocamlbench-repo](https://github.com/OCamlPro/ocamlbench-repo).
- This needs to be a valid opam 2.0 repository, and contains the
- benchmarks as normal packages and the compilers as versions of the
- package `ocaml-variants`.
- To add a compiler to the list, you must have a publicly accessible
- version of your branch (if you're making a pull request against the
- compiler, you should have a branch on GitHub that was used to make
- the pull request, which you can use for this purpose).
- Then, you should make a pull request against `ocamlbench-repo`
- that adds a directory in the `packages/ocaml-variants` sub-folder
- containing a single `opam` file (a sketch is given after this list).
- The contents of the file should be inspired by the other files
- already present, with the main points of interest being the `url`
- field, which should point to your branch, the `build` field, which
- should be adapted if the features you want to benchmark depend on
- configure-time options, and the `setenv` field, which can be used to
- pass compiler options via the `OCAMLPARAM` environment variable.
- The `trunk+flambda+opt` compiler, for instance, both uses a
- `configure` option and sets the `OCAMLPARAM` variable.
- The directory you add has to be named `ocaml-variants.%VERSION%+%DESCR%`,
- where `%VERSION%` is the version that will be used by opam to
- check compatibility with the opam packages that are needed for the
- benchmarks, and `%DESCR%` is a short description of the feature
- you're benchmarking (if you're making a pull request against `ocaml`,
- you can use the PR number in the description, e.g. `+gpr0000`).
- Once your pull request is merged, it will likely take a few hours
- for the benchmark server to pick up the new definition, and then
- up to a few more hours before the results appear on the results page.
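
As an illustration only, such an `opam` file could look like the sketch below; every value in it (URL, branch, configure option, `OCAMLPARAM` contents, version and description in the directory name) is a placeholder to adapt, with the files already in `ocamlbench-repo` as the authoritative models:

```
# packages/ocaml-variants/ocaml-variants.%VERSION%+gpr0000/opam (placeholder name)
opam-version: "2.0"
synopsis: "Compiler branch under benchmark (placeholder description)"
# Adapt `build` if the benchmarked feature needs configure-time options.
build: [
  ["./configure" "-prefix" prefix]
  [make "world.opt"]
]
install: [make "install"]
# `setenv` can pass compiler options through OCAMLPARAM (placeholder value).
setenv: [ [ OCAMLPARAM = "_,O3=1" ] ]
# `url` must point to a publicly accessible version of your branch.
url {
  src: "git+https://github.com/your-user/ocaml#your-branch"
}
```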
+
+ Adding tests is also a way to make sure reviewers see working
+ (and failing) examples of the feature you fix, extend or
+ introduce, rather than just an abstract description of it.
## Description of the proposed change
@@ -223,9 +139,14 @@ up to a few hours before the results are available on the results page.
The description of the merge request must contain a precise
explanation of the proposed change.
- Before going in the implementation details, you should include
- a summary of the change, and a high-level description of the design
- of the proposed change, with example use-cases.
+ Before going into the implementation details, you should include
+ a summary of the change, a justification of why it is beneficial, and
+ a high-level description of the design of the proposed change with
+ example use cases.
+
+ Changes have a cost: they require review work and may accidentally
+ introduce new bugs. Communicating the benefits of your PR as clearly
+ as you can will reassure and motivate potential reviewers.
### In the patches