renggli at gmail.com
Tue Jan 10 21:54:52 CET 2012
>>> Adding an index of difficulty to correct the rule:
>>> - Level of automation for the correction of the rule (automatic, semi-automatic, …).
>> Actually this information is there: rules can be corrected automatically
>> if the rule is a subclass of RBTransformationRule.
> Ok good to know. Are they linked to a refactoring?
The RBTransformationRules are specified as rewrite rules, so in the
end they produce an undoable refactoring change.
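To illustrate the idea (a sketch only; the class and selector names below follow refactoring-browser conventions, but the exact API may differ between versions): a transformation rule is typically defined by configuring its rewrite rule with a source pattern and a replacement pattern.

```smalltalk
"Hypothetical RBTransformationRule subclass that rewrites
 `x isNil ifTrue: [...] ifFalse: [...]` into `x ifNil: [...] ifNotNil: [...]`.
 The ``@variables are rewrite-rule metavariables matching any expression."
RBTransformationRule subclass: #IfNilRewriteRule
	instanceVariableNames: ''
	classVariableNames: ''
	package: 'MyCritics'

IfNilRewriteRule >> initialize
	super initialize.
	self rewriteRule
		replace: '``@receiver isNil ifTrue: [``@nilBlock] ifFalse: [``@notNilBlock]'
		with: '``@receiver ifNil: [``@nilBlock] ifNotNil: [``@notNilBlock]'
```

Because the transformation is expressed as a rewrite, applying it yields a refactoring change that can be previewed and undone.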
>>> - Scope of the rule (block, method, class …).
>> Not sure what you mean, but you can scope the rules to any
>> RBEnvironment (see
> I did not mean it in that direction, but rather:
> a rule may impact methods, classes, …
> One idea is to propose a way to assess the cost of fixing a violated rule.
> This is not easy, since some rules are trivial to fix but may report many violating places,
> while another rule can be difficult to fix yet flag only a single place.
> Still, we would like to give some hints to the maintainer.
I think the cost is hard to assess for a tool. It is basically zero if
a refactoring can be instantiated to fix it, but for many code critics
rules there is no such refactoring. A critics rule is merely a
suggestion of how the code could possibly be improved.
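As a rough sketch of that asymmetry (all selectors here are hypothetical, purely to make the point concrete):

```smalltalk
"Hypothetical cost estimate: zero when a refactoring can be
 instantiated automatically, otherwise one manual fix per violation.
 #isTransformation and #criticsOn: are assumed selectors."
estimatedFixCostOf: aRule in: anEnvironment
	| violations |
	violations := aRule criticsOn: anEnvironment.
	^ aRule isTransformation
		ifTrue: [ 0 "the fix can be generated and undone mechanically" ]
		ifFalse: [ violations size "each hit needs human judgement" ]
```

Even this is optimistic: the real cost of a manual fix varies per rule and per site, which is exactly why a tool can only give hints.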
>>> Other costs could be
>>> - cost of applying (maybe the rule requires computing something else, or if a rule has 50000 hits, it generates an extra cost to analyze the results)
>>> - cost of NOT applying the rule (if it could detect a bug and we don't apply it ...)
>> There is something like this in RBLintRule>>#severity. The severity is
>> shown as an icon in OB.
> Ok we should use that.
Jenkins also uses it. All Java tools have a severity flag, so I had to
add it. Have a look at the source of FindBugs, it goes even further (don't
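A rule advertises its importance by overriding #severity; in Pharo the conventional answers are #error, #warning and #information (a sketch following that convention, continuing the hypothetical rule above):

```smalltalk
"Severity of a rule; shown as an icon in OB and usable as a
 severity flag by CI tools such as Jenkins."
IfNilRewriteRule >> severity
	^ #warning
```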
>>> Adding a manifest to manage/mark false positives:
>>> - one manifest (a class) per package.
>>> - Manifest: exclusion of classes/methods of the package for some rules or for all rules.
>>> - Exclusion of rules for the package.
>> See http://www.lukas-renggli.ch/blog/ignoring-lint-rules.
>> Also, many years ago I wrote some infrastructure to run critics rules
>> as part of the tests, which essentially served as a runnable manifest.
> What was it?
> How did you manage false positives? What we want is to add a package Manifest to store false positives,
> because we should not use pragmas for that; we would end up with tons of pragmas everywhere.
Yes, the pragmas are not really ideal, because most of the time you
don't want to see that kind of annotation.
What is available in the repository mentioned is an abstract test case
with a test for each critics rule. Projects would subclass that test
and override a method returning a default environment to run code
critics on. Then you had a little DSL where you could enable/disable
certain rules. And you could modify the default environment for each
rule individually, that is, add or remove classes, class hierarchies,
methods, method prefixes, method patterns, etc. It was basically a
manifest (it could also contain project-specific rules) in the form
of a runnable TestCase.
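In outline, such a runnable manifest could look like this (all class and selector names are illustrative; the actual DSL in the repository mentioned above may differ):

```smalltalk
"A project subclasses the abstract critics test case and
 configures scope and rules through the small DSL."
CriticsManifestTest subclass: #MyProjectCriticsTest
	instanceVariableNames: ''
	classVariableNames: ''
	package: 'MyProject-Tests'

MyProjectCriticsTest >> defaultEnvironment
	"Run all critics rules on this project's package only."
	^ RBBrowserEnvironment new forPackageNames: #('MyProject')

MyProjectCriticsTest >> configureRules
	"Disable one rule entirely, and shrink the environment of
	 another to silence known false positives."
	self disableRule: #longMethods.
	(self environmentForRule: #sentButNotImplemented)
		removeClass: MyGeneratedStubs
```

Because it is an ordinary TestCase, the manifest runs in the build like any other test and documents the accepted exceptions in one place.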
We used this system successfully for the Cmsbox. After careful
configuration we had a system with zero code critics failures, and the
build system immediately barked if a new failure was introduced or an
expected one was removed.