- 11 Jun, 2013 15 commits
- Matthias Braun authored
- Matthias Braun authored
- Matthias Braun authored
- Matthias Braun authored
  - You can now construct the lookup table separately and perform the lowering selectively on irgs (instead of on the whole program at once) while reusing the lookup table.
  - Simplified the API a bit.
  - Removed i_mapper_RuntimeCall. All users become simpler by doing the transformations directly instead of filling in runtime_rt structures...
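  A minimal sketch of how such a split API can be used. The table type and the create/lower/free entry points below are hypothetical stand-ins for the reworked lowering API (the real names live in lower_dw.h); the irp iteration calls are the actual libFirm ones.

  ```c
  #include <libfirm/firm.h>

  /* Hypothetical stand-ins for the reworked API -- not the verbatim names. */
  typedef struct lowering_table_t lowering_table_t;
  lowering_table_t *create_lowering_table(const lwrdw_param_t *param);
  void              lower_dw_irg(ir_graph *irg, lowering_table_t *table);
  void              free_lowering_table(lowering_table_t *table);

  static void lower_all_irgs(const lwrdw_param_t *param)
  {
      /* Build the lookup table once... */
      lowering_table_t *table = create_lowering_table(param);

      /* ...then lower each irg separately, reusing the table.  Lowering
       * only selected irgs would work the same way. */
      for (size_t i = 0, n = get_irp_n_irgs(); i < n; ++i) {
          lower_dw_irg(get_irp_irg(i), table);
      }
      free_lowering_table(table);
  }
  ```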
- Matthias Braun authored
  Use the same style as the SPARC 64bit lowering, making the additional handle_intrinsics step unnecessary now.
- Matthias Braun authored
- Matthias Braun authored
- Matthias Braun authored
- Matthias Braun authored
- Matthias Braun authored
  The before_abi() callback was removed; you can simply put that code at the end of the init() callback. The custom_abi flag was also removed; if a backend needs the be_abi_introduce() machinery (formerly custom_abi == false), it can call it at the end of its init() callback.
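  For a backend that still wants the generic ABI handling, the migration might look like this sketch. The backend name is made up and the exact be_abi_introduce() signature is an assumption; only the calling convention described in the commit (call it at the end of init()) is taken from the message.

  ```c
  /* Assumed signature -- check the backend headers for the real one. */
  void be_abi_introduce(void);

  static void mybackend_init(void)
  {
      /* ... former init() body ... */

      /* ... code that previously lived in the before_abi() callback ... */

      /* Formerly implied by custom_abi == false: */
      be_abi_introduce();
  }
  ```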
- Matthias Braun authored
- Matthias Braun authored
  The code only worked on ia32 and contains several ia32-specific bits, so keep it in the ia32 backend for now. At least the amd64 backend will require a different implementation of PIC.
- Matthias Braun authored
- Christoph Mallon authored
  The result of gs_matrix_gauss_seidel() is never negative.
- Christoph Mallon authored
  get_tarval_uint64() already checks the same.
- 10 Jun, 2013 1 commit
- Manuel Mohr authored
- 07 Jun, 2013 3 commits
- yb9976 authored
  If the solution of our eigenvalue problem results in unreasonable values (like infinity), we now fall back to a loop-weight-based execution frequency or, if that is not sufficient either, to a constant execution frequency.
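  A hedged sketch of that fallback chain; the helper names and the block_info_t type are illustrative, not the actual execfreq identifiers.

  ```c
  #include <math.h>

  typedef struct block_info_t block_info_t;            /* illustrative */
  double solve_eigenvalue_problem(const block_info_t *bi);
  double loop_weight_estimate(const block_info_t *bi);

  static double block_execfreq(const block_info_t *bi)
  {
      /* Preferred estimate: solve the eigenvalue problem. */
      double freq = solve_eigenvalue_problem(bi);

      /* Fall back if the solution is unreasonable (infinity, NaN, ...). */
      if (!isfinite(freq) || freq < 0.0)
          freq = loop_weight_estimate(bi);

      /* Last resort: a constant execution frequency. */
      if (!isfinite(freq) || freq < 0.0)
          freq = 1.0;
      return freq;
  }
  ```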
- Matthias Braun authored
- Matthias Braun authored
- 06 Jun, 2013 11 commits
- Matthias Braun authored
- Matthias Braun authored
- Matthias Braun authored
- Matthias Braun authored
- Matthias Braun authored
- Matthias Braun authored
- Matthias Braun authored
- Matthias Braun authored
- Matthias Braun authored
  Backends that support rotl now match or(shl,shr) patterns instead.
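  In C source terms, the or(shl,shr) shape is the classic rotate idiom, for example:

  ```c
  #include <stdint.h>

  /* or(shl, shr) with complementary shift amounts -- the pattern a
   * rotl-capable backend can now turn into a single rotate instruction.
   * The '& 31' masks keep both shifts well-defined for n == 0. */
  static inline uint32_t rotl32(uint32_t x, unsigned n)
  {
      n &= 31;
      return (x << n) | (x >> ((32 - n) & 31));
  }
  ```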
- Matthias Braun authored
- Matthias Braun authored
  This makes it consistent with the rest of libFirm and avoids cases where a non-inlined version of the functions is accidentally used inside the header.
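  The pitfall being avoided is a C99 inline subtlety; a small illustration (not the actual libFirm code):

  ```c
  /* A plain 'inline' definition in a header does not by itself emit a
   * standalone function body.  If the compiler declines to inline a call,
   * the linker needs an extern instantiation from some translation unit,
   * and mixing styles can silently bind to that non-inlined version. */
  inline int plain_inline_add(int a, int b) { return a + b; }

  /* 'static inline' gives each translation unit its own local copy, so
   * calls always resolve predictably. */
  static inline int static_inline_add(int a, int b) { return a + b; }
  ```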
- 26 May, 2013 4 commits
- Christoph Mallon authored
  This ensures that only kept blocks get the extra weight for the keep edge.
- Christoph Mallon authored
- Christoph Mallon authored
- Christoph Mallon authored
- 18 May, 2013 2 commits
- 17 May, 2013 4 commits
- yb9976 authored
  We should never see an Id node, and Call nodes should never have mode_X.
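  A hedged sketch of the tightened invariants; the wrapper function is made up, but the node predicates and mode accessors are the real libFirm ones.

  ```c
  #include <assert.h>
  #include <libfirm/firm.h>

  static void check_node(ir_node *node)
  {
      /* Id nodes are transient and must be gone by this point. */
      assert(!is_Id(node));

      /* A Call itself produces a tuple (mode_T); control flow leaves it
       * only through Proj nodes, so the Call never has mode_X. */
      assert(!is_Call(node) || get_irn_mode(node) != mode_X);
  }
  ```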
- yb9976 authored
- Andreas Fried authored
  After killing a Store (e.g. because it is dead), its value can be killed as well if no other node uses it. We can use the code in reduce_adr_usage for that, which also works for non-pointers. This commit adds a new function, kill_and_reduce_usage, which works for Load and Store nodes: it kills the node and all Loads on which only the killed node depends.
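  A heavily simplified sketch of the idea, not the verbatim ldstopt code: rerouting of the Load's memory Proj and other bookkeeping are elided, and the helper assumes out-edge information is available on the irg.

  ```c
  #include <libfirm/firm.h>

  /* Kill a dead Load/Store, then recursively kill Loads whose result
   * Proj fed only the node just killed.  (Real code must also reroute
   * the memory chain; that step is omitted in this sketch.) */
  static void kill_and_reduce_usage_sketch(ir_node *node)
  {
      int      arity = get_irn_arity(node);
      ir_node *preds[arity];
      for (int i = 0; i < arity; ++i)
          preds[i] = get_irn_n(node, i);

      kill_node(node);

      for (int i = 0; i < arity; ++i) {
          ir_node *pred = preds[i];
          /* A Load's value reaches its users through a result Proj; if
           * that Proj just lost its last user, the Load is dead too. */
          if (is_Proj(pred) && get_irn_n_edges(pred) == 0) {
              ir_node *load = get_Proj_pred(pred);
              if (is_Load(load))
                  kill_and_reduce_usage_sketch(load);
          }
      }
  }
  ```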
- Andreas Fried authored
  optimize_load can introduce dead stores, which are not removed because they have already been visited (the irg is walked from Start to End). This commit adds a second irg walk that performs dead store elimination after the other optimizations have run, and removes dead store elimination from the first walk.
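  Structurally, the change amounts to something like this sketch; the two callback names and their bodies are illustrative, while irg_walk_graph is the real libFirm walker.

  ```c
  #include <libfirm/firm.h>

  /* Illustrative callbacks, not the verbatim ldstopt names. */
  static void walk_optimize(ir_node *node, void *env)
  {
      (void)env; (void)node; /* ... Load/Store optimizations ... */
  }

  static void walk_eliminate_dead_stores(ir_node *node, void *env)
  {
      (void)env; (void)node; /* ... remove Stores that became dead ... */
  }

  static void optimize_load_store_sketch(ir_graph *irg)
  {
      /* First walk (Start to End): the usual Load/Store optimizations.
       * optimize_load can turn already-visited Stores dead here. */
      irg_walk_graph(irg, NULL, walk_optimize, NULL);

      /* Second walk: clean up the dead Stores the first walk left behind. */
      irg_walk_graph(irg, NULL, walk_eliminate_dead_stores, NULL);
  }
  ```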