- 11 Jun, 2013 3 commits
  - Matthias Braun authored
  - Christoph Mallon authored
    The result of gs_matrix_gauss_seidel() is never negative.
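    For reference (not part of the commit), the Gauss-Seidel update and a typical residual look like this; assuming gs_matrix_gauss_seidel() returns a residual of this absolute-value form (an assumption here, the message only asserts the sign), non-negativity is immediate:

    ```math
    x_i^{(k+1)} = \frac{1}{a_{ii}}\Bigl(b_i - \sum_{j<i} a_{ij} x_j^{(k+1)} - \sum_{j>i} a_{ij} x_j^{(k)}\Bigr),
    \qquad
    r = \sum_i \bigl|x_i^{(k+1)} - x_i^{(k)}\bigr| \ge 0
    ```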
  - Christoph Mallon authored
    get_tarval_uint64() performs the same check.
- 10 Jun, 2013 1 commit
  - Manuel Mohr authored
- 07 Jun, 2013 3 commits
  - yb9976 authored
    If the solution of our eigenvalue problem yields unreasonable values (such as infinity), we now fall back to a loop-weight-based execution frequency or, if that is still not sufficient, to a constant execution frequency.
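    A minimal sketch of such a fallback chain, with hypothetical helper names (the actual libFirm code differs):

    ```c
    #include <math.h>
    #include <stdio.h>

    /* Hypothetical stand-ins for the real estimators (assumptions, not libFirm API). */
    static double solve_eigenvalue_problem(void) { return INFINITY; /* simulate failure */ }
    static double loop_weight_based_freq(void)   { return 4.0; /* e.g. from loop nesting */ }

    /* The fallback chain described in the commit message. */
    static double estimate_exec_freq(void)
    {
        double freq = solve_eigenvalue_problem();   /* preferred estimate */
        if (!isfinite(freq))
            freq = loop_weight_based_freq();        /* first fallback */
        if (!isfinite(freq))
            freq = 1.0;                             /* constant fallback */
        return freq;
    }

    int main(void)
    {
        printf("estimated frequency: %f\n", estimate_exec_freq());
        return 0;
    }
    ```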
  - Matthias Braun authored
  - Matthias Braun authored
- 06 Jun, 2013 11 commits
  - Matthias Braun authored
  - Matthias Braun authored
  - Matthias Braun authored
  - Matthias Braun authored
  - Matthias Braun authored
  - Matthias Braun authored
  - Matthias Braun authored
  - Matthias Braun authored
  - Matthias Braun authored
    The backends that support rotl now match or(shl,shr) patterns.
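    The pattern in question is the portable rotate idiom; a backend with rotl support can now select a single rotate instruction for code like this:

    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* The classic rotate-left idiom: or(shl, shr). */
    static uint32_t rotl32(uint32_t x, unsigned n)
    {
        n &= 31;                                  /* keep the shift amount in range */
        return (x << n) | (x >> ((32 - n) & 31)); /* the or(shl, shr) pattern */
    }

    int main(void)
    {
        printf("%08x\n", rotl32(0x80000001u, 1)); /* prints 00000003 */
        return 0;
    }
    ```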
  - Matthias Braun authored
  - Matthias Braun authored
    This makes them consistent with the rest of libFirm and avoids cases where the non-inlined version of a function is accidentally used inside the header.
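    The commit title is not preserved here, but the note describes a well-known C pitfall; a minimal sketch assuming the header functions were made static inline (hypothetical header and function name):

    ```c
    /* foo.h (hypothetical header, for illustration only) */

    /* With plain C99 `inline`, a call from within the header can end up
     * binding to the non-inlined, out-of-line definition in some other
     * translation unit. `static inline` gives each translation unit its
     * own definition, so the header always uses the inlined version. */
    static inline int foo_is_empty(int count)
    {
        return count == 0;
    }
    ```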
- 26 May, 2013 4 commits
  - Christoph Mallon authored
    This ensures that only kept blocks get the extra weight for the keep edge.
  - Christoph Mallon authored
  - Christoph Mallon authored
  - Christoph Mallon authored
- 18 May, 2013 2 commits
- 17 May, 2013 4 commits
  - yb9976 authored
    We should never see an Id node, and Call nodes should never have mode_X.
  - yb9976 authored
  - Andreas Fried authored
    After killing a Store (e.g. because it is dead), its value operand can be killed as well if no other node uses it. The code in reduce_adr_usage can be reused for this, since it also works for non-pointers. This commit adds a new function, kill_and_reduce_usage, which works for Load and Store nodes: it kills the node and all Loads on which only the killed node depends.
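    A toy model of the cascading kill (not libFirm's real data structures): killing a node decrements its operand's use count, and a Load whose last user disappears is killed in turn.

    ```c
    #include <stdio.h>

    /* Toy def-use graph; this models only the idea, not libFirm's API. */
    typedef struct node node;
    struct node {
        const char *name;
        node       *value;   /* node producing this node's value operand */
        int         n_users; /* how many nodes still use this node */
        int         is_load; /* only Loads cascade in this sketch */
    };

    static void kill_and_reduce_usage(node *n)
    {
        printf("killing %s\n", n->name);
        node *value = n->value;
        if (value != NULL) {
            value->n_users -= 1;              /* killed node no longer uses it */
            if (value->n_users == 0 && value->is_load)
                kill_and_reduce_usage(value); /* Load only fed the killed node */
        }
    }

    int main(void)
    {
        node load  = { "Load",  NULL,  1, 1 };
        node store = { "Store", &load, 0, 0 };
        kill_and_reduce_usage(&store);        /* kills the Store, then the Load */
        return 0;
    }
    ```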
  - Andreas Fried authored
    optimize_load can introduce dead stores that are not removed, because they have already been visited (the irg is walked from Start to End). This commit adds a second irg walk that performs dead store elimination after the other optimizations have run, and removes dead store elimination from the first walk.
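    A sketch of why a second walk is needed: visited flags keep the first walk from reconsidering a node that only became dead during that same walk (toy walker, not libFirm's irg_walk machinery):

    ```c
    #include <stdio.h>

    /* Toy walker over three "nodes" with visited flags. */
    enum { N = 3 };
    static int visited[N];
    static int dead[N];

    static void walk(void (*fn)(int))
    {
        for (int i = 0; i < N; i++) {
            if (visited[i]) continue;
            visited[i] = 1;
            fn(i);                /* each node is handled exactly once */
        }
    }

    static void optimize(int i)
    {
        if (i == 2)
            dead[0] = 1;          /* node 0 was already visited: too late */
    }

    static void sweep_dead(int i)
    {
        if (dead[i])
            printf("removing dead node %d\n", i);
    }

    int main(void)
    {
        walk(optimize);                               /* first walk misses node 0 */
        for (int i = 0; i < N; i++) visited[i] = 0;   /* fresh visited flags */
        walk(sweep_dead);                             /* second walk removes it */
        return 0;
    }
    ```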
- 15 May, 2013 7 commits
  - Andreas Fried authored
    Previously, if the load and store modes were not equal, the optimization aborted immediately. This is unnecessary: we only have to abort if the load and store modes' arithmetic differs. This exposes a bug in type-based alias analysis: when a CopyB is lowered to Loads and Stores, their modes are always Iu instead of the actual modes of the struct's members. The test case opt/fehler199.c fails as a result, so type-based alias analysis is switched off in cparser.
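    A toy version of the relaxed check (libFirm's real get_mode_arithmetic operates on ir_mode; the types here are stand-ins):

    ```c
    #include <stdio.h>

    /* What matters for combining a Load with a Store is the arithmetic
     * class of the modes, not the identity of the modes themselves. */
    typedef enum { ARITH_TWOS_COMPLEMENT, ARITH_IEEE754 } arithmetic_t;
    typedef struct { const char *name; arithmetic_t arithmetic; } toy_mode;

    static int can_combine(const toy_mode *load_mode, const toy_mode *store_mode)
    {
        /* Old check (too strict): load_mode == store_mode.
         * New check: only the arithmetic has to match. */
        return load_mode->arithmetic == store_mode->arithmetic;
    }

    int main(void)
    {
        toy_mode Iu = { "Iu", ARITH_TWOS_COMPLEMENT };
        toy_mode Is = { "Is", ARITH_TWOS_COMPLEMENT };
        toy_mode F  = { "F",  ARITH_IEEE754 };
        printf("Iu/Is: %d\n", can_combine(&Iu, &Is)); /* 1: same arithmetic */
        printf("Iu/F:  %d\n", can_combine(&Iu, &F));  /* 0: different arithmetic */
        return 0;
    }
    ```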
  - Andreas Fried authored
    When inlining a function, cparser produces a Sync after the parameters have been copied, which has an edge to the Call's memory input. If there is at least one struct argument, this edge is not needed and is no longer generated. Also, if there is only one parameter, the Sync itself is not generated.
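    A sketch of that construction rule against libFirm's node constructors; the helper name and surrounding logic are assumptions, not cparser's actual code:

    ```c
    #include <libfirm/firm.h>

    /* Pick the memory value after the inlined parameter copies (sketch). */
    static ir_node *mem_after_param_copies(ir_node *block, ir_node *call_mem,
                                           ir_node **copy_mems, int n_copies)
    {
        if (n_copies == 0)
            return call_mem;     /* nothing was copied */
        /* With a struct argument present, the copies already depend on the
         * Call's memory, so no extra edge to call_mem is added. */
        if (n_copies == 1)
            return copy_mems[0]; /* a single predecessor needs no Sync */
        return new_r_Sync(block, n_copies, copy_mems);
    }
    ```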
  - Matthias Braun authored
    Nobody uses it currently, and it is a burden for everyone writing a new pass.
  - Matthias Braun authored
  - Matthias Braun authored
  - Matthias Braun authored
    ! binds tighter than <
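    The bug class, illustrated:

    ```c
    #include <stdio.h>

    int main(void)
    {
        int x = 3, y = 5;
        /* ! binds tighter than <, so !x < y parses as (!x) < y. */
        printf("!x < y   = %d\n", !x < y);   /* (0 < 5) -> 1 */
        printf("!(x < y) = %d\n", !(x < y)); /* !(1)    -> 0 */
        return 0;
    }
    ```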
  - Matthias Braun authored
    Most aspects of the fp_model flag would have to be enforced by constructing the firm graph in a certain way; setting a flag does not help. Apart from that, nearly none of it was implemented anyway.
- 07 May, 2013 3 commits
  - Matthias Braun authored
    They are now considered low-level operations that just allocate/free a block of memory on the stack. No high-level type information is attached anymore, nor is heap allocation supported. Frontends/liboo should provide their own high-level nodes if they need these features.
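    In C terms, the semantics that remain correspond to raw, alloca-style stack allocation (an illustration of the semantics only, not libFirm code):

    ```c
    #include <alloca.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        size_t size = 16;
        char  *buf  = alloca(size); /* raw stack block, freed on return */
        memset(buf, 0, size);
        strcpy(buf, "stack block");
        printf("%s\n", buf);
        return 0;
    }
    ```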
  - Matthias Braun authored
  - Matthias Braun authored
- 06 May, 2013 2 commits
  - Matthias Braun authored
  - Matthias Braun authored