[PPL-devel] integer versus rational solutions

Albert Cohen Albert.Cohen at inria.fr
Mon Jul 13 12:07:15 CEST 2009


Tobias Grosser wrote:
> On Fri, 2009-07-10 at 23:57 +0200, Albert Cohen wrote:
>> Hi Tobias.
>>
>> Tobias Grosser wrote:
>>> Stop. Do I understand this correctly? You do not want to check
>>> dependencies after strip mining? I am very concerned about that, as one
>>> of the benefits of working on the polytope model, I thought, is that we
>>> can schedule transformations in an arbitrary order. So it is required
>>> to reanalyze strip-mined loops. This is also necessary for the
>>> dependency checking we use for autopar.
>> As presented in Austin in November, it is much preferable to follow the 
>> transformation composition approach of Girbal and Vasilache that does 
>> not require intermediate steps of dependence analysis and legality checking.
> 
> You are referring to "Facilitating the search for compositions of
> program transformations"?

Yes. The 2d+1 encoding and the additional invariants/rules make it 
possible to compose classical transformations and generalize them to a 
polyhedral representation that may have diverged arbitrarily far from 
the original syntax tree.
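
To fix ideas (a rough sketch from memory, not the paper's exact
notation): a statement at loop depth d gets a schedule with 2d+1
dimensions, alternating static components (constants encoding textual
order) and dynamic components (affine functions of the iterators):

  for (i = 0; i < N; i++)
    {
      S1;                      /* schedule(i)   = (0, i, 0)       */
      for (j = 0; j < M; j++)
        S2;                    /* schedule(i,j) = (0, i, 1, j, 0) */
    }

Fusion/fission only edit the static components, interchange/skewing
only the dynamic ones, so long sequences of transformations compose
without regenerating code in between.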

 > (...)

>> Konrad is working on Step 2.1 (enhancing strip-mine to update the 
>> dependence info). Then, if you agree, he will help restructure the API 
>> such that dependence computation and dependence checking are ALWAYS 
>> called explicitly, rather than implicitly in the transformation 
>> primitives.
> 
> Sure. I would like us to even be able to call them after any single
> transformation.
> 
> A loop like the following would be nice for the optimizations.
> 
> while (select_and_apply_transform (scop)) {
> 	check_dependencies (scop);
> }

Yes. Skipping some checks requires stronger technology to guarantee that 
transformation steps will be legal or will eventually be legal/checked, 
or even corrected/completed via enabling transformations (see 
Vasilache's PACT'07 paper).

As a concrete, more original example: together with Razya and Konrad, we 
are thinking about enabling parloops and interchange with an automatic 
privatization "corrective step". We would ignore memory-based 
dependences when looking for interchange combinations that improve both 
parloops and autovect (using an extended cost model), then check which 
of those dependences were actually violated by the selected 
transformation sequence, and privatize only the scalars/arrays that need 
it to effectively remove those violated memory-based dependences.
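
In driver terms, this would look roughly like the sketch below (all
function names are hypothetical, not existing Graphite entry points):

  /* Search for a profitable interchange while ignoring memory-based
     (WAR/WAW) dependences, then pay only for the ones we broke.  */
  relax_memory_based_dependences (scop);
  transform = select_interchange_for_parloops_and_autovect (scop);
  apply_transform (scop, transform);
  violated = violated_memory_based_dependences (scop, transform);
  for_each_reference (ref, violated)
    privatize (scop, ref);     /* only what actually needs it */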

> I know this will be expensive and we want to disable this in the
> production compiler. But I want us to be able to use this loop for
> debugging.

Sure.

> I totally agree that it is useful to avoid polyhedral computations, as
> they tend to be expensive. However I still want to be able to do CHECK
> and ANALYSIS at every single step, as it will show us the assumptions
> we made to optimize compile time.
> If we make any assumptions (which might be necessary), I would like them
> not to be hidden by avoiding dependency ANALYSIS; instead I want our
> (nightly) tests to fail.
>
> This will allow an analysis/discussion of the assumptions we made. Maybe
> we can avoid them; if not, we know we have to document them. Every
> assumption has to be written down in a well-known place so that future
> developers will not be surprised.

OK. But running the ANALYSIS itself after a transformation makes little 
sense to me, even in debugging mode. Can you explain why you want it?

>>> Another thing I do not understand is that you are often referring to
>>> "large constants". I thought complexity problems mainly appear because
>>> of a high number of dimensions, but not because of high values in an
>>> (in)equality. Can you explain this to me?
>> As mentioned several times, big constants tend to break the "structure" 
>> of ILP problems or other polyhedral operations, leading to combinatorial 
>> explosions. This is well known even in the case of simple algorithms 
>> like Fourier-Motzkin elimination. They also increase the risk of memory 
>> overflow.
> 
> OK. I definitely believe you, but as this seems to become one of the
> important points for Graphite performance I would like to understand
> this well. Maybe we can also write a Wiki page to explain which
> constructs should be avoided. We already learned that NNC_Polyhedron is
> bad, but what else is dangerous?
> 
> Here I am still not getting the problem. I looked again into Fourier
> Motzkin and at first glance I could not find anything in its complexity
> that depends on the values of the coefficients in the (in)equalities.
> Any pointer or example that might help me understand this problem would
> be really helpful.

FM blows up when it generates too many inequalities. To avoid this, 
people generally favor the eliminations that generate 0 or 1 new 
inequality first.
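
To give an order of magnitude (a back-of-the-envelope count, not a
formal bound): eliminating a variable that occurs in p lower bounds and
q upper bounds removes those p+q inequalities and generates p*q new
ones, so the system can roughly square at each elimination step and
grows doubly exponentially in the worst case. Hence the heuristic of
first picking variables with p*q <= 1, i.e., eliminations that generate
at most one new inequality.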

>> In the end, the bigger the constant, the higher the loss of information 
>> when relaxing to rational polyhedra.
> 
> OK. However, if we do not put in the additional restrictions, the
> information would never have been available. Or are we losing other
> information because of putting in additional restrictions?

Sorry, I missed that part. Can you re-explain?

>> All of this is very informal/intuitive of course. There are some/few 
>> theoretical results about this, but Roberto and other experts are better 
>> placed than I am to answer it!
> 
> @Roberto:
> Can you give us some hints about what is really expensive and what we
> should avoid when constructing polyhedra in the PPL?
> 
> Tobi
