[PPL-devel] integer versus rational solutions

Tobias Grosser grosser at fim.uni-passau.de
Sun Jul 12 01:01:19 CEST 2009


On Fri, 2009-07-10 at 23:57 +0200, Albert Cohen wrote:
> Hi Tobias.
> 
> Tobias Grosser wrote:
> > Stop. Do I understand this correctly: you do not want to check
> > dependencies after strip mining? I am very concerned about that, as I
> > thought one of the benefits of working on the polytope model is that
> > we can schedule transformations in an arbitrary order. So it is
> > required to reanalyze strip-mined loops. This is also necessary for
> > the dependency checking we use for autopar.
> 
> As presented in Austin in November, it is much preferable to follow the 
> transformation composition approach of Girbal and Vasilache that does 
> not require intermediate steps of dependence analysis and legality checking.

You are referring to "Facilitating the search for compositions of
program transformations"?

> In this framework, the typical/ideal flow of transformation construction 
> is the following:
> 
> 1. Compute dependence relations for all pairs and store them persistently.
> 
> 2. Select a primitive transformation and apply it to the polyhedral 
> representation.
>    2.1. If the transformation involved strip-mining, update the 
> dependence information such that all the polyhedra associated with the 
> strip-mined statements get one extra "time column" filled only with 
> zeroes; you may check that this is correct and strictly equivalent to 
> recomputing the dependence information in the case where strip-mining 
> is applied in isolation (not composed with other transformations) and 
> the polyhedral computation is exact and integral.
> 
> 3. If the transformation computed so far is not good enough, compose it 
> with another one by going back to Step 2. For example, start with 
> strip-mining, then interchange, to achieve tiling. Of course, much more 
> complex schemes should be crafted later on.
> 
> 4. When you are sick of composing more and more primitive 
> transformations into a big complex one, check the legality of the result 
> by comparing the transformed schedule with the stored dependence 
> relations (updated through the strip-mining steps).
> 
> Of course, Step 2 should be a little careful to select only 
> transformations that will not cause trouble later in Step 4. A simple 
> way is to add a Step 2.2 that checks for legality after transforming the 
> schedule and backtracks if the check fails, attempting another 
> transformation selection.

Thanks for explaining.
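
Just to check that I read Step 2.1 correctly, here is a minimal sketch of
what I would expect the dependence update to look like in PPL (my own toy
code, just to verify my understanding; real code would also have to place
the new dimension at the correct position among the time dimensions):

#include <ppl.hh>
using namespace Parma_Polyhedra_Library;

/* Append one "time column" fixed to zero to the dependence polyhedron
   of a strip-mined statement.  */
void
append_zero_time_column (C_Polyhedron &dependence)
{
  dimension_type n = dependence.space_dimension ();
  /* Embed the polyhedron in a space with one extra dimension...  */
  dependence.add_space_dimensions_and_embed (1);
  /* ...and constrain the new dimension to be zero.  */
  dependence.add_constraint (Variable (n) == 0);
}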


> Konrad is working on Step 2.1 (enhancing strip-mine to update the 
> dependence info). Then, if you agree, he will help restructure the API 
> such that dependence computation and dependence checking are ALWAYS 
> called explicitly, rather than implicitly in the transformation 
> primitives.

Sure. I would even like us to be able to call them after any single
transformation.

A loop like the following would be nice for the optimizations:

while (select_and_apply_transform (scop)) {
	check_dependencies ();
}

I know this will be expensive, and we will want to disable it in the
production compiler. But I want us to be able to use this loop for
debugging.

> 
> If you do not agree with, or do not fully understand, the motivations 
> for this scheme, 
> I'm happy to discuss it further, but I'll be barely available next week 
> due to an overload of paper deadlines and the ACACES summer school of 
> the HiPEAC network (where I would gladly invite you to apply, for next 
> year :-)).
> 
> > It would be nice to read a post about what assumptions/solutions you
> > plan to adopt to make sure these complexity problems will not appear
> > again. Do they limit our ability to run optimizations after we have
> > strip-mined a loop?
> 
> No, as explained above, we only want to avoid running dependence 
> ANALYSIS after applying transformations. Running dependence CHECK for 
> legality should be fine, although for compilation time reasons it is 
> better to avoid it whenever possible (other concrete algorithms exist to 
> correct transformations, complement transformations, or constrain 
> transformations to span only legal points, but we can look at this later).

I totally agree that it is useful to avoid polyhedral computations, as
they tend to be expensive. However, I still want to be able to run both
CHECK and ANALYSIS at every single step, as this will expose the
assumptions we made to optimize compile time.
If we make any assumptions (which might be necessary), I would like them
not to be hidden by skipping the dependency ANALYSIS; instead, I want our
(nightly) tests to fail.
This will allow an analysis/discussion of the assumptions we made. Maybe
we can avoid them; if not, we know we have to document them. Every
assumption has to be written down in a well-known place so that future
developers will not be surprised.
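
Concretely, I imagine a checking-build verification along these lines
(the helper names are made up, just to illustrate the idea; only scop_p,
gcc_assert and ENABLE_CHECKING are existing GCC names):

#ifdef ENABLE_CHECKING
/* Recompute the dependence relations from scratch and compare them with
   the incrementally updated ones; any divergence is an assumption we
   have to document or get rid of.  */
static void
verify_dependence_update (scop_p scop)
{
  /* Dependence info maintained incrementally (e.g. by Step 2.1).  */
  dependence_info updated = scop_stored_dependences (scop);
  /* Full dependence ANALYSIS, recomputed from the current polyhedra.  */
  dependence_info fresh = analyze_dependences (scop);
  gcc_assert (dependence_info_equal (updated, fresh));
}
#endif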


> > Another thing I do not understand is that you are often referring to
> > "large constants". I thought complexity problems mainly appear because
> > of a high number of dimensions, but not because of high values in an
> > (in)equality. Can you explain this to me?
> 
> As mentioned several times, big constants tend to break the "structure" 
> of ILP problems or other polyhedral operations, leading to combinatorial 
> explosions. This is well known even in the case of simple algorithms 
> like Fourier-Motzkin. They also increase the risk of memory overflow.

OK. I definitely believe you, but as this seems to be becoming one of the
important points for Graphite performance, I would like to understand it
well. Maybe we can also write a Wiki page to explain which constructs
should be avoided. We already learned that NNC_Polyhedron is bad, but
what else is dangerous?

Here I am still not getting the problem. I looked again at
Fourier-Motzkin, and at first glance I could not find anything in its
complexity that depends on the values of the coefficients in the
(in)equalities. Any pointer or example that might help me understand
this problem would be really helpful.
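
To make the question concrete, here is my current understanding of a
single elimination step as a toy sketch (plain C++, nothing to do with
the actual PPL implementation): every pair of inequalities with opposite
signs on the eliminated variable gets combined, so the number of
inequalities can grow quadratically per step, and the combined
coefficients are products of the original ones. Is this multiplicative
coefficient growth part of what you mean by big constants breaking the
structure?

#include <cstddef>
#include <vector>

struct inequality
{
  std::vector<long> a; /* coefficients: a[0]*x0 + ... + a[n-1]*xn-1 <= b */
  long b;
};

/* One Fourier-Motzkin step: eliminate variable k from the system.  */
std::vector<inequality>
eliminate (const std::vector<inequality> &sys, std::size_t k)
{
  std::vector<inequality> out;
  for (std::size_t i = 0; i < sys.size (); i++)
    for (std::size_t j = 0; j < sys.size (); j++)
      {
	const inequality &p = sys[i]; /* want a[k] > 0 */
	const inequality &n = sys[j]; /* want a[k] < 0 */
	if (p.a[k] <= 0 || n.a[k] >= 0)
	  continue;
	/* Scale with positive multipliers so the x_k terms cancel;
	   note how the original coefficients get multiplied.  */
	long cp = -n.a[k], cn = p.a[k];
	inequality r;
	r.a.resize (p.a.size ());
	for (std::size_t m = 0; m < p.a.size (); m++)
	  r.a[m] = cp * p.a[m] + cn * n.a[m]; /* r.a[k] becomes 0 */
	r.b = cp * p.b + cn * n.b;
	out.push_back (r);
      }
  /* Inequalities that do not mention x_k carry over unchanged.  */
  for (std::size_t i = 0; i < sys.size (); i++)
    if (sys[i].a[k] == 0)
      out.push_back (sys[i]);
  return out;
}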

> Finally, the bigger the constant, the higher the loss of information 
> when relaxing to rational polyhedra.

OK. However, if we do not put in the additional restrictions, the
information would never have been available anyway. Or are we losing
other information because of the additional restrictions?

> All of this is very informal/intuitive of course. There are some/few 
> theoretical results about this, but Roberto and other experts are better 
> placed than myself to answer it!

@Roberto:
Can you give us some hints about what is really expensive and what we
should avoid when constructing polyhedra in the PPL?

Tobi



