This repository has been archived by the owner on Sep 2, 2018. It is now read-only.

Implement machine code generation #2

Closed
dylanmckay opened this issue Nov 26, 2014 · 7 comments

Comments

@dylanmckay
Member

Currently, only assembly code output is supported. It should be possible to generate ELF files and plain binaries (without a header, with the entry point at the beginning of the file).
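
For context, a rough sketch of the kind of MC-layer hook this involves in LLVM: an MCELFObjectTargetWriter subclass that the backend registers so object files can be emitted directly. This is illustrative only and not code from this repository; the class name and the use of ELF::EM_AVR are assumptions based on how other LLVM backends wire this up.

```cpp
// Hypothetical sketch only -- not code from this repository. Registering
// an MCELFObjectTargetWriter subclass (plus an MCCodeEmitter for the raw
// instruction encodings) is what lets `llc -filetype=obj` emit ELF
// directly instead of textual assembly.
#include "llvm/MC/MCELFObjectWriter.h"
#include "llvm/Support/ELF.h"

namespace llvm {
class AVRELFObjectWriterSketch : public MCELFObjectTargetWriter {
public:
  explicit AVRELFObjectWriterSketch(uint8_t OSABI)
      : MCELFObjectTargetWriter(/*Is64Bit=*/false, OSABI, ELF::EM_AVR,
                                /*HasRelocationAddend=*/true) {}
  // The fixup -> ELF relocation mapping is the bulk of the real work and
  // is omitted here.
};
} // end namespace llvm
```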

@4ntoine

4ntoine commented Nov 26, 2014

A great part of the work is done! Can the code from https://github.com/sushihangover/llvm-avr be used to finish ELF/plain binary generation?

@dylanmckay
Member Author

@4ntoine This repository started from the code at https://github.com/sushihangover/llvm-avr, which originated from http://sourceforge.net/projects/avr-llvm/. Neither project supports machine code generation.

However, I am currently working on this issue. It shouldn't be too hard, as the raw instruction formats are already described. I have a branch (not yet pushed) which implements most of the code for ELF output; I will merge and push the changes once the code is complete (which should be either today or tomorrow, barring any unexpected bugs).

@dylanmckay
Member Author

I have to get better at not hitting that button.

@4ntoine

4ntoine commented Nov 26, 2014

Dylan, you're doing a very good (and much needed) job! Please keep working.

As for me, I was advised to read an AVR book (http://www.amazon.com/Programming-Customizing-Microcontroller-Dhananjay-Gadre/dp/007134666X) and I'm reading it. I'd like to join the project and do what I can, but I'm afraid that at the moment I don't have the necessary knowledge of either AVR or LLVM/Clang (though I have actively used the Clang C API in some of my projects). For now I'm going to check that it's working and learn as much as I can. I think I will be able to do something useful in a few weeks.

@dylanmckay
Member Author

Any help would be great, as a working AVR backend for LLVM would be very useful. Feel free to ask me any questions. Good luck!

@4ntoine

4ntoine commented Nov 26, 2014

Can we have a mailing list for the project, or just your email to contact you directly?

@dylanmckay
Member Author

dylanmckay commented Nov 26, 2014

A mailing list seems a bit over the top for two people. My email: redacted

dylanmckay pushed a commit that referenced this issue Feb 4, 2015
type erased interface and a single analysis pass rather than an
extremely complex analysis group.

The end result is that the TTI analysis can contain a type erased
implementation that supports the polymorphic TTI interface. We can build
one from a target-specific implementation or from a dummy one in the IR.
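
As a rough illustration of the value-semantic, type-erased wrapper idea being described (names here are invented for the example and are not the real TTI classes):

```cpp
#include <memory>
#include <utility>

// A value-semantic wrapper that can hold any concrete implementation
// exposing the same interface, without requiring a common base class.
class CostModel {
  struct Concept {
    virtual ~Concept() = default;
    virtual int getOpCost(int Opcode) const = 0;
  };
  template <typename T> struct Model final : Concept {
    T Impl;
    explicit Model(T X) : Impl(std::move(X)) {}
    int getOpCost(int Opcode) const override { return Impl.getOpCost(Opcode); }
  };
  std::unique_ptr<Concept> Self;

public:
  template <typename T>
  CostModel(T Impl) : Self(new Model<T>(std::move(Impl))) {}
  int getOpCost(int Opcode) const { return Self->getOpCost(Opcode); }
};

// Any target-specific struct with a matching getOpCost() can be wrapped:
//   struct MyTargetCosts { int getOpCost(int) const { return 2; } };
//   CostModel CM(MyTargetCosts{});
```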

I've also factored all of the code into "mix-in"-able base classes,
including CRTP base classes to facilitate calling back up to the most
specialized form when delegating horizontally across the surface. These
aren't as clean as I would like and I'm planning to work on cleaning
some of this up, but I wanted to start by putting it into the right form.
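
A tiny sketch of the CRTP "delegate back to the most specialized form" pattern referred to here (illustrative names, not the actual base classes):

```cpp
// The base provides generic defaults but routes through the derived
// class, so a target's override of one hook is also picked up by the
// shared helpers that delegate to it.
template <typename Derived> struct CostBase {
  int getOpCost(int Opcode) const { return 1; } // generic default
  int getVectorOpCost(int Opcode) const {
    // "call back up" to the most specialized implementation
    return static_cast<const Derived *>(this)->getOpCost(Opcode) * 4;
  }
};

struct MyTargetCosts : CostBase<MyTargetCosts> {
  int getOpCost(int Opcode) const { return Opcode == 0 ? 0 : 2; }
};

// MyTargetCosts{}.getVectorOpCost(1) == 8, using the target's scalar cost.
```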

There are a number of reasons for this change, and this particular
design. The first and foremost reason is that an analysis group is
complete overkill, and the chaining delegation strategy was so opaque,
confusing, and high overhead that TTI was suffering greatly for it.
Several of the TTI functions had not been implemented in all places
because the chaining-based delegation meant there was no checking for
this. A few other functions were implemented with incorrect delegation.
The message to me while working on this was very clear -- the delegation
and analysis group structure was too confusing to be useful here.

The other reason of course is that this is a *much* more natural fit for
the new pass manager. This will lay the ground work for a type-erased
per-function info object that can look up the correct subtarget and even
cache it.

Yet another benefit is that this will significantly simplify the
interaction of the pass managers and the TargetMachine. See the future
work below.

The downside of this change is that it is very, very verbose. I'm going
to work to improve that, but it is somewhat an implementation necessity
in C++ to do type erasure. =/ I discussed this design really extensively
with Eric and Hal prior to going down this path, and afterward showed
them the result. No one was really thrilled with it, but there doesn't
seem to be a substantially better alternative. Using a base class and
virtual method dispatch would make the code much shorter, but as
discussed in the update to the programmer's manual and elsewhere,
a polymorphic interface feels like the more principled approach even if
this is perhaps the least compelling example of it. ;]

Ultimately, there is still a lot more to be done here, but this was the
huge chunk that I couldn't really split things out of because this was
the interface change to TTI. I've tried to minimize all the other parts
of this. The follow up work should include at least:

1) Improving the TargetMachine interface by having it directly return
   a TTI object. Because we have a non-pass object with value semantics
   and an internal type erasure mechanism, we can narrow the interface
   of the TargetMachine to *just* do what we need: build and return
   a TTI object that we can then insert into the pass pipeline.
2) Make the TTI object be fully specialized for a particular function.
   This will include splitting off a minimal form of it which is
   sufficient for the inliner and the old pass manager.
3) Add a new pass manager analysis which produces TTI objects from the
   target machine for each function. This may actually be done as part
   of #2 in order to use the new analysis to implement #2.
4) Work on narrowing the API between TTI and the targets so that it is
   easier to understand and less verbose to type erase.
5) Work on narrowing the API between TTI and its clients so that it is
   easier to understand and less verbose to forward.
6) Try to improve the CRTP-based delegation. I feel like this code is
   just a bit messy and exacerbating the complexity of implementing
   the TTI in each target.

Many thanks to Eric and Hal for their help here. I ended up blocked on
this somewhat more abruptly than I expected, and so I appreciate getting
it sorted out very quickly.

Differential Revision: http://reviews.llvm.org/D7293

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227669 91177308-0d34-0410-b5e6-96231b3b80d8
dylanmckay pushed a commit that referenced this issue Jun 4, 2015
in-register LUT technique.

Summary:
A description of this technique can be found here:
http://wm.ite.pl/articles/sse-popcount.html

The core of the idea is to use an in-register lookup table and the
PSHUFB instruction to compute the population count for the low and high
nibbles of each byte, and then to use horizontal sums to aggregate these
into vector population counts with wider element types.

On x86 there is an instruction that will directly compute the horizontal
sum for the low 8 and high 8 bytes, giving vNi64 popcount very easily.
Various tricks are used to get vNi32 and vNi16 from the vNi8 that the
LUT computes.
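
For concreteness, here is a minimal intrinsics sketch of the nibble-LUT + PSHUFB idea described above (this shows the general technique, not the LLVM lowering itself; it assumes SSSE3 is enabled, e.g. with -mssse3):

```cpp
#include <immintrin.h>

// Per-byte population count: split each byte into low/high nibbles, look
// each nibble up in a 16-entry in-register table with PSHUFB, and add.
static inline __m128i popcnt_epi8(__m128i V) {
  const __m128i LUT  = _mm_setr_epi8(0, 1, 1, 2, 1, 2, 2, 3,
                                     1, 2, 2, 3, 2, 3, 3, 4);
  const __m128i Mask = _mm_set1_epi8(0x0F);
  __m128i Lo = _mm_and_si128(V, Mask);
  __m128i Hi = _mm_and_si128(_mm_srli_epi16(V, 4), Mask);
  return _mm_add_epi8(_mm_shuffle_epi8(LUT, Lo), _mm_shuffle_epi8(LUT, Hi));
}

// Widen to v2i64 counts: PSADBW horizontally sums the bytes of each
// 64-bit half against zero, which is the "horizontal sum" step above.
static inline __m128i popcnt_epi64(__m128i V) {
  return _mm_sad_epu8(popcnt_epi8(V), _mm_setzero_si128());
}
```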

The base implementation of this, and most of the work, was done by Bruno
in a follow-up to D6531. See Bruno's detailed post there for lots of
timing information about these changes.

I have extended Bruno's patch in the following ways:

0) I committed the new tests with baseline sequences so this shows
   a diff, and regenerated the tests using the update scripts.

1) Bruno had noticed and mentioned in IRC a redundant mask that
   I removed.

2) I introduced a particular optimization for the i32 vector cases where
   we use PSHL + PSADBW to compute the low i32 popcounts, and PSHUFD
   + PSADBW to compute doubled high i32 popcounts. This takes advantage
   of the fact that to line up the high i32 popcounts we have to shift
   them anyways, and we can shift them by one fewer bit to effectively
   divide the count by two. While the PSHUFD based horizontal add is no
   faster, it doesn't require registers or load traffic the way a mask
   would, and provides more ILP as it happens on different ports with
   high throughput.

3) I did some code cleanups throughout to simplify the implementation
   logic.

4) I refactored it to continue to use the parallel bitmath lowering when
   SSSE3 is not available, to preserve the performance of that version on
   SSE2 targets, where it is still much better than scalarizing, as we'll
   still do a bitmath implementation of popcount even in scalar code
   there (a scalar sketch of that bitmath approach follows after this
   list).
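
For reference, the "parallel bitmath" style of popcount mentioned in item 4 looks like this in scalar form (a textbook SWAR sketch, not the exact LLVM lowering):

```cpp
#include <cstdint>

// Counts bits within each byte lane using only shifts, masks and adds,
// so it needs no PSHUFB/SSSE3; the vector lowering does the same thing
// per element.
static inline uint64_t popcntPerByte(uint64_t V) {
  V = V - ((V >> 1) & 0x5555555555555555ULL);                           // 2-bit sums
  V = (V & 0x3333333333333333ULL) + ((V >> 2) & 0x3333333333333333ULL); // 4-bit sums
  V = (V + (V >> 4)) & 0x0F0F0F0F0F0F0F0FULL;                           // per-byte counts
  return V;
}
```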

With #1 and #2 above, I analyzed the result in IACA for sandybridge,
ivybridge, and haswell. In every case I measured, the throughput is the
same or better using the LUT lowering, even v2i64 and v4i64, and even
compared with using the native popcnt instruction! The latency of the
LUT lowering is often higher than the latency of the scalarized popcnt
instruction sequence, but I think those latency measurements are deeply
misleading. Keeping the operation fully in the vector unit and having
many chances for increased throughput seems much more likely to win.

With this, we can lower every integer vector popcount implementation
using the LUT strategy if we have SSSE3 or better (and thus have
PSHUFB). I've updated the operation lowering to reflect this. This also
fixes an issue where we were scalarizing horribly some AVX lowerings.

Finally, there are some remaining cleanups. There is duplication between
the two techniques in how they perform the horizontal sum once the byte
population count is computed. I'm going to factor and merge those two in
a separate follow-up commit.

Differential Revision: http://reviews.llvm.org/D10084

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238636 91177308-0d34-0410-b5e6-96231b3b80d8
dylanmckay pushed a commit that referenced this issue Oct 23, 2015
Convert two halfword loads into a single 32-bit word load with bitfield extract
instructions. For example:
  ldrh w0, [x2]
  ldrh w1, [x2, #2]
becomes
  ldr w0, [x2]
  ubfx w1, w0, #16, #16
  and  w0, w0, #ffff
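
A source-level pattern that this kind of combine targets might look like the following (an assumed illustration, not a test case from the commit):

```cpp
#include <cstdint>

// Two adjacent 16-bit loads; with the combine above they can become one
// 32-bit load plus a UBFX/AND to split out the two halves.
uint32_t sumAdjacentHalves(const uint16_t *P) {
  uint16_t Lo = P[0]; // ldrh w0, [x2]
  uint16_t Hi = P[1]; // ldrh w1, [x2, #2]
  return Lo + Hi;
}
```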

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@250719 91177308-0d34-0410-b5e6-96231b3b80d8
dylanmckay pushed a commit that referenced this issue Feb 3, 2016
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@259602 91177308-0d34-0410-b5e6-96231b3b80d8
dylanmckay pushed a commit that referenced this issue Mar 3, 2016
Summary:
For instance, compiling the below results in a panic:

```
llc: ../lib/CodeGen/InlineSpiller.cpp:1140: bool (anonymous namespace)::InlineSpiller::foldMemoryOperand(ArrayRef<std::pair<MachineInstr *, unsigned int> >, llvm::MachineInstr *): Assertion `MO->isDead() && "Cannot fold physreg def"' failed.
#0 0x00007f50fbcf353e llvm::sys::PrintStackTrace(llvm::raw_ostream&) /home/h/3rd/llvm/build/../lib/Support/Unix/Signals.inc:321:15
#1 0x00007f50fbcf3929 PrintStackTraceSignalHandler(void*) /home/h/3rd/llvm/build/../lib/Support/Unix/Signals.inc:380:1
#2 0x00007f50fbcf22a3 llvm::sys::RunSignalHandlers() /home/h/3rd/llvm/build/../lib/Support/Signals.cpp:45:5
#3 0x00007f50fbcf3bb4 SignalHandler(int) /home/h/3rd/llvm/build/../lib/Support/Unix/Signals.inc:210:1
#4 0x00007f50fa87a180 (/lib/x86_64-linux-gnu/libc.so.6+0x35180)
#5 0x00007f50fa87a107 gsignal (/lib/x86_64-linux-gnu/libc.so.6+0x35107)
#6 0x00007f50fa87b4e8 abort (/lib/x86_64-linux-gnu/libc.so.6+0x364e8)
#7 0x00007f50fa873226 (/lib/x86_64-linux-gnu/libc.so.6+0x2e226)
#8 0x00007f50fa8732d2 (/lib/x86_64-linux-gnu/libc.so.6+0x2e2d2)
#9 0x00007f50fddd9287 (anonymous namespace)::InlineSpiller::foldMemoryOperand(llvm::ArrayRef<std::pair<llvm::MachineInstr*, unsigned int> >, llvm::MachineInstr*) /home/h/3rd/llvm/build/../lib/CodeGen/InlineSpiller.cpp:1141:21
#10 0x00007f50fddd9ee9 (anonymous namespace)::InlineSpiller::spillAroundUses(unsigned int) /home/h/3rd/llvm/build/../lib/CodeGen/InlineSpiller.cpp:1286:9
#11 0x00007f50fddd388b (anonymous namespace)::InlineSpiller::spillAll() /home/h/3rd/llvm/build/../lib/CodeGen/InlineSpiller.cpp:1338:21
#12 0x00007f50fddd221d (anonymous namespace)::InlineSpiller::spill(llvm::LiveRangeEdit&) /home/h/3rd/llvm/build/../lib/CodeGen/InlineSpiller.cpp:1391:3
#13 0x00007f50fdfd921b (anonymous namespace)::RAGreedy::selectOrSplitImpl(llvm::LiveInterval&, llvm::SmallVectorImpl<unsigned int>&, llvm::SmallSet<unsigned int, 16u, std::less<unsigned int> >&, unsigned int) /home/h/3rd/llvm/build/../lib/CodeGen/RegAllocGreedy.cpp:2555:5
#14 0x00007f50fdfd647b (anonymous namespace)::RAGreedy::selectOrSplit(llvm::LiveInterval&, llvm::SmallVectorImpl<unsigned int>&) /home/h/3rd/llvm/build/../lib/CodeGen/RegAllocGreedy.cpp:2221:12
#15 0x00007f50fdfc89f9 llvm::RegAllocBase::allocatePhysRegs() /home/h/3rd/llvm/build/../lib/CodeGen/RegAllocBase.cpp:110:14
#16 0x00007f50fdfd6337 (anonymous namespace)::RAGreedy::runOnMachineFunction(llvm::MachineFunction&) /home/h/3rd/llvm/build/../lib/CodeGen/RegAllocGreedy.cpp:2611:3
#17 0x00007f50fded33ee llvm::MachineFunctionPass::runOnFunction(llvm::Function&) /home/h/3rd/llvm/build/../lib/CodeGen/MachineFunctionPass.cpp:43:3
#18 0x00007f50fd6cdc6f llvm::FPPassManager::runOnFunction(llvm::Function&) /home/h/3rd/llvm/build/../lib/IR/LegacyPassManager.cpp:1550:23
#19 0x00007f50fd6cdf85 llvm::FPPassManager::runOnModule(llvm::Module&) /home/h/3rd/llvm/build/../lib/IR/LegacyPassManager.cpp:1571:16
#20 0x00007f50fd6ce71a (anonymous namespace)::MPPassManager::runOnModule(llvm::Module&) /home/h/3rd/llvm/build/../lib/IR/LegacyPassManager.cpp:1627:23
#21 0x00007f50fd6ce246 llvm::legacy::PassManagerImpl::run(llvm::Module&) /home/h/3rd/llvm/build/../lib/IR/LegacyPassManager.cpp:1730:16
#22 0x00007f50fd6cec31 llvm::legacy::PassManager::run(llvm::Module&) /home/h/3rd/llvm/build/../lib/IR/LegacyPassManager.cpp:1761:3
#23 0x0000000000415bdc compileModule(char**, llvm::LLVMContext&) /home/h/3rd/llvm/build/../tools/llc/llc.cpp:405:5
#24 0x0000000000414571 main /home/h/3rd/llvm/build/../tools/llc/llc.cpp:211:13
#25 0x00007f50fa866b45 __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x21b45)
#26 0x0000000000414296 _start (/home/h/3rd/llvm/build/bin/llc+0x414296)
Stack dump:
0.	Program arguments: ./bin/llc -mtriple msp430 loadstore.ll 
1.	Running pass 'Function Pass Manager' on module 'loadstore.ll'.
2.	Running pass 'Greedy Register Allocator' on function '@inc'
```

Original IR:

```llvm
%struct.VeryLarge = type { i8, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32 }

; Function Attrs: norecurse nounwind
define void @inc(%struct.VeryLarge* noalias nocapture sret %agg.result, %struct.VeryLarge* byval align 1 %s) #0 {
entry:
  %p0 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 0
  %0 = load i8, i8* %p0, align 1, !tbaa !1
  %p1 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 1
  %1 = load i32, i32* %p1, align 1, !tbaa !6
  %p2 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 2
  %2 = load i32, i32* %p2, align 1, !tbaa !7
  %p3 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 3
  %3 = load i32, i32* %p3, align 1, !tbaa !8
  %p4 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 4
  %4 = load i32, i32* %p4, align 1, !tbaa !9
  %p5 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 5
  %5 = load i32, i32* %p5, align 1, !tbaa !10
  %p6 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 6
  %6 = load i32, i32* %p6, align 1, !tbaa !11
  %p7 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 7
  %7 = load i32, i32* %p7, align 1, !tbaa !12
  %p8 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 8
  %8 = load i32, i32* %p8, align 1, !tbaa !13
  %p9 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 9
  %9 = load i32, i32* %p9, align 1, !tbaa !14
  %p10 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 10
  %10 = load i32, i32* %p10, align 1, !tbaa !15
  %p11 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 11
  %11 = load i32, i32* %p11, align 1, !tbaa !16
  %p12 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 12
  %12 = load i32, i32* %p12, align 1, !tbaa !17
  %p13 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 13
  %13 = load i32, i32* %p13, align 1, !tbaa !18
  %p14 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 14
  %14 = load i32, i32* %p14, align 1, !tbaa !19
  %p15 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 15
  %15 = load i32, i32* %p15, align 1, !tbaa !20
  %p16 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 16
  %16 = load i32, i32* %p16, align 1, !tbaa !21
  %p17 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 17
  %17 = load i32, i32* %p17, align 1, !tbaa !22
  %p18 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 18
  %18 = load i32, i32* %p18, align 1, !tbaa !23
  %p19 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 19
  %19 = load i32, i32* %p19, align 1, !tbaa !24
  %p20 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 20
  %20 = load i32, i32* %p20, align 1, !tbaa !25
  %p21 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 21
  %21 = load i32, i32* %p21, align 1, !tbaa !26
  %p22 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 22
  %22 = load i32, i32* %p22, align 1, !tbaa !27
  %p23 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 23
  %23 = load i32, i32* %p23, align 1, !tbaa !28
  %p24 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 24
  %24 = load i32, i32* %p24, align 1, !tbaa !29
  %p25 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 25
  %25 = load i32, i32* %p25, align 1, !tbaa !30
  %p26 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 26
  %26 = load i32, i32* %p26, align 1, !tbaa !31
  %p27 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 27
  %27 = load i32, i32* %p27, align 1, !tbaa !32
  %p28 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 28
  %28 = load i32, i32* %p28, align 1, !tbaa !33
  %p29 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 29
  %29 = load i32, i32* %p29, align 1, !tbaa !34
  %p30 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 30
  %30 = load i32, i32* %p30, align 1, !tbaa !35
  %p31 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 31
  %31 = load i32, i32* %p31, align 1, !tbaa !36
  %p32 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 32
  %32 = load i32, i32* %p32, align 1, !tbaa !37
  %add = add i8 %0, 1
  store i8 %add, i8* %p0, align 1, !tbaa !1
  %add2 = add i32 %1, 2
  store i32 %add2, i32* %p1, align 1, !tbaa !6
  %add3 = add i32 %2, 3
  store i32 %add3, i32* %p2, align 1, !tbaa !7
  %add4 = add i32 %3, 4
  store i32 %add4, i32* %p3, align 1, !tbaa !8
  %add5 = add i32 %4, 5
  store i32 %add5, i32* %p4, align 1, !tbaa !9
  %add6 = add i32 %5, 6
  store i32 %add6, i32* %p5, align 1, !tbaa !10
  %add7 = add i32 %6, 7
  store i32 %add7, i32* %p6, align 1, !tbaa !11
  %add8 = add i32 %7, 8
  store i32 %add8, i32* %p7, align 1, !tbaa !12
  %add9 = add i32 %8, 9
  store i32 %add9, i32* %p8, align 1, !tbaa !13
  %add10 = add i32 %9, 10
  store i32 %add10, i32* %p9, align 1, !tbaa !14
  %add11 = add i32 %10, 11
  store i32 %add11, i32* %p10, align 1, !tbaa !15
  %add12 = add i32 %11, 12
  store i32 %add12, i32* %p11, align 1, !tbaa !16
  %add13 = add i32 %12, 13
  store i32 %add13, i32* %p12, align 1, !tbaa !17
  %add14 = add i32 %13, 14
  store i32 %add14, i32* %p13, align 1, !tbaa !18
  %add15 = add i32 %14, 15
  store i32 %add15, i32* %p14, align 1, !tbaa !19
  %add16 = add i32 %15, 16
  store i32 %add16, i32* %p15, align 1, !tbaa !20
  %add17 = add i32 %16, 17
  store i32 %add17, i32* %p16, align 1, !tbaa !21
  %add18 = add i32 %17, 18
  store i32 %add18, i32* %p17, align 1, !tbaa !22
  %add19 = add i32 %18, 19
  store i32 %add19, i32* %p18, align 1, !tbaa !23
  %add20 = add i32 %19, 20
  store i32 %add20, i32* %p19, align 1, !tbaa !24
  %add21 = add i32 %20, 21
  store i32 %add21, i32* %p20, align 1, !tbaa !25
  %add22 = add i32 %21, 22
  store i32 %add22, i32* %p21, align 1, !tbaa !26
  %add23 = add i32 %22, 23
  store i32 %add23, i32* %p22, align 1, !tbaa !27
  %add24 = add i32 %23, 24
  store i32 %add24, i32* %p23, align 1, !tbaa !28
  %add25 = add i32 %24, 25
  store i32 %add25, i32* %p24, align 1, !tbaa !29
  %add26 = add i32 %25, 26
  store i32 %add26, i32* %p25, align 1, !tbaa !30
  %add27 = add i32 %26, 27
  store i32 %add27, i32* %p26, align 1, !tbaa !31
  %add28 = add i32 %27, 28
  store i32 %add28, i32* %p27, align 1, !tbaa !32
  %add29 = add i32 %28, 29
  store i32 %add29, i32* %p28, align 1, !tbaa !33
  %add30 = add i32 %29, 30
  store i32 %add30, i32* %p29, align 1, !tbaa !34
  %add31 = add i32 %30, 31
  store i32 %add31, i32* %p30, align 1, !tbaa !35
  %add32 = add i32 %31, 32
  store i32 %add32, i32* %p31, align 1, !tbaa !36
  %add33 = add i32 %32, 33
  store i32 %add33, i32* %p32, align 1, !tbaa !37
  %33 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %agg.result, i32 0, i32 0
  call void @llvm.memcpy.p0i8.p0i8.i32(i8* %33, i8* %p0, i32 129, i32 1, i1 false), !tbaa.struct !38
  ret void
}

; Function Attrs: argmemonly nounwind
declare void @llvm.memcpy.p0i8.p0i8.i32(i8* nocapture, i8* nocapture readonly, i32, i32, i1) #1

attributes #0 = { norecurse nounwind "disable-tail-calls"="false" "less-precise-fpmad"="false" "no-frame-pointer-elim"="true" "no-frame-pointer-elim-non-leaf" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "unsafe-fp-math"="false" "use-soft-float"="false" }
attributes #1 = { argmemonly nounwind }

!llvm.ident = !{!0}

!0 = !{!"clang version 3.8.0 (git://github.com/llvm-mirror/clang 40ef2b7531472c41212c4719a9294aeb7bddebbc) (git://github.com/llvm-mirror/llvm c601eaf55606dfb9ad372b514b77aa00d1409be1)"}
!1 = !{!2, !3, i64 0}
!2 = !{!"", !3, i64 0, !5, i64 1, !5, i64 5, !5, i64 9, !5, i64 13, !5, i64 17, !5, i64 21, !5, i64 25, !5, i64 29, !5, i64 33, !5, i64 37, !5, i64 41, !5, i64 45, !5, i64 49, !5, i64 53, !5, i64 57, !5, i64 61, !5, i64 65, !5, i64 69, !5, i64 73, !5, i64 77, !5, i64 81, !5, i64 85, !5, i64 89, !5, i64 93, !5, i64 97, !5, i64 101, !5, i64 105, !5, i64 109, !5, i64 113, !5, i64 117, !5, i64 121, !5, i64 125}
!3 = !{!"omnipotent char", !4, i64 0}
!4 = !{!"Simple C/C++ TBAA"}
!5 = !{!"int", !3, i64 0}
!6 = !{!2, !5, i64 1}
!7 = !{!2, !5, i64 5}
!8 = !{!2, !5, i64 9}
!9 = !{!2, !5, i64 13}
!10 = !{!2, !5, i64 17}
!11 = !{!2, !5, i64 21}
!12 = !{!2, !5, i64 25}
!13 = !{!2, !5, i64 29}
!14 = !{!2, !5, i64 33}
!15 = !{!2, !5, i64 37}
!16 = !{!2, !5, i64 41}
!17 = !{!2, !5, i64 45}
!18 = !{!2, !5, i64 49}
!19 = !{!2, !5, i64 53}
!20 = !{!2, !5, i64 57}
!21 = !{!2, !5, i64 61}
!22 = !{!2, !5, i64 65}
!23 = !{!2, !5, i64 69}
!24 = !{!2, !5, i64 73}
!25 = !{!2, !5, i64 77}
!26 = !{!2, !5, i64 81}
!27 = !{!2, !5, i64 85}
!28 = !{!2, !5, i64 89}
!29 = !{!2, !5, i64 93}
!30 = !{!2, !5, i64 97}
!31 = !{!2, !5, i64 101}
!32 = !{!2, !5, i64 105}
!33 = !{!2, !5, i64 109}
!34 = !{!2, !5, i64 113}
!35 = !{!2, !5, i64 117}
!36 = !{!2, !5, i64 121}
!37 = !{!2, !5, i64 125}
!38 = !{i64 0, i64 1, !39, i64 1, i64 4, !40, i64 5, i64 4, !40, i64 9, i64 4, !40, i64 13, i64 4, !40, i64 17, i64 4, !40, i64 21, i64 4, !40, i64 25, i64 4, !40, i64 29, i64 4, !40, i64 33, i64 4, !40, i64 37, i64 4, !40, i64 41, i64 4, !40, i64 45, i64 4, !40, i64 49, i64 4, !40, i64 53, i64 4, !40, i64 57, i64 4, !40, i64 61, i64 4, !40, i64 65, i64 4, !40, i64 69, i64 4, !40, i64 73, i64 4, !40, i64 77, i64 4, !40, i64 81, i64 4, !40, i64 85, i64 4, !40, i64 89, i64 4, !40, i64 93, i64 4, !40, i64 97, i64 4, !40, i64 101, i64 4, !40, i64 105, i64 4, !40, i64 109, i64 4, !40, i64 113, i64 4, !40, i64 117, i64 4, !40, i64 121, i64 4, !40, i64 125, i64 4, !40}
!39 = !{!3, !3, i64 0}
!40 = !{!5, !5, i64 0}
```



Reviewers: asl

Subscribers: qcolombet

Differential Revision: http://reviews.llvm.org/D17441

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@261746 91177308-0d34-0410-b5e6-96231b3b80d8
dylanmckay pushed a commit that referenced this issue May 7, 2016
dylanmckay pushed a commit that referenced this issue May 7, 2016
Summary:
With the removal of support for lazy parsing of combined index summary
records (e.g. r267344), we no longer need to include the summary record
bitcode offset in the VST entries for definitions. Change the combined
index format to be similar to the per-module index format in using value
ids to cross-reference from the summary record to the VST entry (rather
than the summary record bitcode offset to cross-reference in the other
direction).

The visible changes are:
1) Add the value id to the combined summary records
2) Remove the summary offset from the combined VST records, which has
the following effects:
- No longer need the VST_CODE_COMBINED_GVDEFENTRY record, as all
  combined index VST entries now only contain the value id and
  corresponding GUID.
- No longer have duplicate VST entries in the case where there are
  multiple definitions of a symbol (e.g. weak/linkonce), as they all
  have the same value id and GUID.

An implication of #2 above is that in order to hook up an alias to the
correct aliasee based on the value id of the aliasee recorded in the
combined index alias record, we need to scan the entries in the index
for that GUID to find the one from the same module (i.e. the case where
there are multiple entries for the aliasee). But the reader no longer
has to maintain a special map to hook up the alias/aliasee.

Reviewers: joker.eph

Subscribers: joker.eph, llvm-commits

Differential Revision: http://reviews.llvm.org/D19481

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@267712 91177308-0d34-0410-b5e6-96231b3b80d8
dylanmckay pushed a commit that referenced this issue May 7, 2016
Summary: This reverts commit d88cc08.

#0 0xfed467 in llvm::ARMFrameLowering::determineCalleeSaves(llvm::MachineFunction&, llvm::BitVector&, llvm::RegScavenger*) const /mnt/b/sanitizer-buildbot2/sanitizer-x86_64-linux-bootstrap/build/llvm/lib/Target/ARM/ARMFrameLowering.cpp:1625:52
#1 0x330d4cc in (anonymous namespace)::PEI::runOnMachineFunction(llvm::MachineFunction&) /mnt/b/sanitizer-buildbot2/sanitizer-x86_64-linux-bootstrap/build/llvm/lib/CodeGen/PrologEpilogInserter.cpp:186:3
#2 0x3193e12 in llvm::MachineFunctionPass::runOnFunction(llvm::Function&) /mnt/b/sanitizer-buildbot2/sanitizer-x86_64-linux-bootstrap/build/llvm/lib/CodeGen/MachineFunctionPass.cpp:60:13
#3 0x396237d in llvm::FPPassManager::runOnFunction(llvm::Function&) /mnt/b/sanitizer-buildbot2/sanitizer-x86_64-linux-bootstrap/build/llvm/lib/IR/LegacyPassManager.cpp:1526:23
#4 0x3962a23 in llvm::FPPassManager::runOnModule(llvm::Module&) /mnt/b/sanitizer-buildbot2/sanitizer-x86_64-linux-bootstrap/build/llvm/lib/IR/LegacyPassManager.cpp:1547:16
#5 0x3963d52 in runOnModule /mnt/b/sanitizer-buildbot2/sanitizer-x86_64-linux-bootstrap/build/llvm/lib/IR/LegacyPassManager.cpp:1603:23
#6 0x3963d52 in llvm::legacy::PassManagerImpl::run(llvm::Module&) /mnt/b/sanitizer-buildbot2/sanitizer-x86_64-linux-bootstrap/build/llvm/lib/IR/LegacyPassManager.cpp:1706
#7 0x6bb910 in compileModule(char**, llvm::LLVMContext&) /mnt/b/sanitizer-buildbot2/sanitizer-x86_64-linux-bootstrap/build/llvm/tools/llc/llc.cpp:412:5
#8 0x6b3c25 in main /mnt/b/sanitizer-buildbot2/sanitizer-x86_64-linux-bootstrap/build/llvm/tools/llc/llc.cpp:218:22
#9 0x7fd4a7d37ec4 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x21ec4)
#10 0x625c93 in _start (/mnt/b/sanitizer-buildbot2/sanitizer-x86_64-linux-bootstrap/build/llvm_build_msan/bin/llc+0x625c93)

Reviewers:

Subscribers:

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@268536 91177308-0d34-0410-b5e6-96231b3b80d8
dylanmckay pushed a commit that referenced this issue Sep 15, 2016
This test code previously caused a failure in the module verifier,
because SimplifyCFG created this invalid instruction, which tries to
take the address of inline asm:
  %.sink = select i1 %1, i64 ()* asm "mov $0, #1", "=r", i64 ()* asm "mov $0, #2", "=r"

This has been fixed recently, presumably by James Molloy's patches that
rewrote and changed parts of SimplifyCFG, so this patch just adds a
regression test for it.

Differential Revision: https://reviews.llvm.org/D24231



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@280660 91177308-0d34-0410-b5e6-96231b3b80d8
dylanmckay pushed a commit that referenced this issue Oct 27, 2016
This patch assigns a cost to the scaling used in addressing.
On many ARM cores, a negated register offset takes longer than a
non-negated register offset, in a register-offset addressing mode.

For instance:

(1) LDR R0, [R1, R2, LSL #2]
(2) LDR R0, [R1, -R2, LSL #2]

Above, (1) takes fewer cycles than (2).

By assigning an appropriate scaling factor cost, we enable LLVM
to make the right trade-offs in the optimization and code-selection phases.
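
As an assumed source-level illustration (not from the patch), the negated form typically comes from indexing with a negated induction variable:

```cpp
// Assumed mapping for illustration:
//   A[I]  tends to lower to  LDR r0, [r1, r2, LSL #2]
//   A[-I] tends to lower to  LDR r0, [r1, -r2, LSL #2]
int pick(const int *A, int I, bool Backward) {
  return Backward ? A[-I] : A[I];
}
```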

Differential Revision: http://reviews.llvm.org/D24857

Reviewers: jmolloy, rengolin




git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@284127 91177308-0d34-0410-b5e6-96231b3b80d8
dylanmckay pushed a commit that referenced this issue Oct 27, 2016
…ump tables

The TBB and TBH instructions in Thumb-2 allow jump tables to be compressed into sequences of bytes or shorts respectively. These instructions do not exist in Thumb-1, however it is possible to synthesize them out of a sequence of other instructions.

It turns out this sequence is so short that it's almost never a loss for performance and is ALWAYS a significant win for code size.
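
For illustration, a dense switch such as the one below is the kind of code that gets lowered through a jump table (an assumed example, not one from the patch); the point of this change is how compactly Thumb-1 can index that table:

```cpp
// Dense, contiguous cases make the backend emit a jump table rather than
// a compare-and-branch chain; with this change the table entries can be
// bytes/halfwords even on Thumb-1.
int classify(int X) {
  switch (X) {
  case 0: return 10;
  case 1: return 21;
  case 2: return 33;
  case 3: return 47;
  case 4: return 52;
  case 5: return 68;
  default: return -1;
  }
}
```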

TBB example:
Before: lsls r0, r0, #2    After: add  r0, pc
        adr  r1, .LJTI0_0         ldrb r0, [r0, #6]
        ldr  r0, [r0, r1]         lsls r0, r0, #1
        mov  pc, r0               add  pc, r0
  => No change in prologue code size or dynamic instruction count. Jump table shrunk by a factor of 4.

The only case that can increase dynamic instruction count is the TBH case:

Before: lsls r0, r4, #2    After: lsls r4, r4, #1
        adr  r1, .LJTI0_0         add  r4, pc
        ldr  r0, [r0, r1]         ldrh r4, [r4, #6]
        mov  pc, r0               lsls r4, r4, #1
                                  add  pc, r4
  => 1 more instruction in prologue. Jump table shrunk by a factor of 2.

So there is an argument that this should be disabled when optimizing for performance (and a TBH needs to be generated). I'm not so sure about that in practice, because on small cores with Thumb-1 performance is often tied to code size. But I'm willing to turn it off when optimizing for performance if people want (also note that TBHs are fairly rare in practice!)

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@284580 91177308-0d34-0410-b5e6-96231b3b80d8
dylanmckay pushed a commit that referenced this issue Nov 7, 2016
…ump tables

[Reapplying r284580 and r285917 with fix and testing to ensure emitted jump tables for Thumb-1 have 4-byte alignment]

The TBB and TBH instructions in Thumb-2 allow jump tables to be compressed into sequences of bytes or shorts respectively. These instructions do not exist in Thumb-1, however it is possible to synthesize them out of a sequence of other instructions.

It turns out this sequence is so short that it's almost never a loss for performance and is ALWAYS a significant win for code size.

TBB example:
Before: lsls r0, r0, #2    After: add  r0, pc
        adr  r1, .LJTI0_0         ldrb r0, [r0, #6]
        ldr  r0, [r0, r1]         lsls r0, r0, #1
        mov  pc, r0               add  pc, r0
  => No change in prologue code size or dynamic instruction count. Jump table shrunk by a factor of 4.

The only case that can increase dynamic instruction count is the TBH case:

Before: lsls r0, r4, #2    After: lsls r4, r4, #1
        adr  r1, .LJTI0_0         add  r4, pc
        ldr  r0, [r0, r1]         ldrh r4, [r4, #6]
        mov  pc, r0               lsls r4, r4, #1
                                  add  pc, r4
  => 1 more instruction in prologue. Jump table shrunk by a factor of 2.

So there is an argument that this should be disabled when optimizing for performance (and a TBH needs to be generated). I'm not so sure about that in practice, because on small cores with Thumb-1 performance is often tied to code size. But I'm willing to turn it off when optimizing for performance if people want (also note that TBHs are fairly rare in practice!)

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@285690 91177308-0d34-0410-b5e6-96231b3b80d8
dylanmckay pushed a commit that referenced this issue Nov 12, 2016
This optimization merges adjacent zero stores into a wider store.

e.g.,

strh wzr, [x0]
strh wzr, [x0, #2]
; becomes
str wzr, [x0]

e.g.,

str wzr, [x0]
str wzr, [x0, #4]
; becomes
str xzr, [x0]

Previously, this was only enabled for Kryo and Cortex-A57.
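
At the source level, the pattern being merged is simply a pair of adjacent zeroing stores, e.g. (an assumed illustration):

```cpp
#include <cstdint>

// Two adjacent 16-bit zero stores; with this optimization they can be
// emitted as a single 32-bit store of wzr (and similarly two 32-bit
// zero stores as one 64-bit store of xzr).
void clearPair(uint16_t *P) {
  P[0] = 0; // strh wzr, [x0]
  P[1] = 0; // strh wzr, [x0, #2]
}
```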

Differential Revision: https://reviews.llvm.org/D26396

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@286592 91177308-0d34-0410-b5e6-96231b3b80d8