Tag about-compilers
26 bookmarks have this tag.
Stuff about how compilers work. Interpreters, runtimes, and other stuff like that also go here.
a fun implementation of *Kanren with nice Haskell interoperability. showcases some fun Haskell features.
a really cool post explaining the design of gc-arena
Latency, throughput and port usage of x86 instructions.
With some partial evaluation and specialization hints, it is possible to get pretty decent speedups on interpreters by turning them into compilers.
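A tiny sketch of the idea (my own example, not the post's code): instead of re-walking an AST on every call, dispatch once up front and build a tree of closures. This is a poor man's version of the specialization you get from partial-evaluation hints.

```rust
// A sketch of turning an interpreter into a compiler: dispatch over
// the AST happens once, at "compile" time, instead of on every call.
enum Expr {
    Const(i64),
    Var,
    Add(Box<Expr>, Box<Expr>),
}

// Naive interpreter: re-matches on the AST for every evaluation.
fn eval(e: &Expr, x: i64) -> i64 {
    match e {
        Expr::Const(c) => *c,
        Expr::Var => x,
        Expr::Add(a, b) => eval(a, x) + eval(b, x),
    }
}

// "Partially evaluated" form: pre-compile the AST into a tree of
// closures, so repeated calls pay no dispatch cost.
fn compile(e: &Expr) -> Box<dyn Fn(i64) -> i64> {
    match e {
        Expr::Const(c) => {
            let c = *c;
            Box::new(move |_| c)
        }
        Expr::Var => Box::new(|x| x),
        Expr::Add(a, b) => {
            let (fa, fb) = (compile(a), compile(b));
            Box::new(move |x| fa(x) + fb(x))
        }
    }
}
```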
The Rust language has incrementally grown a set of patterns to support control-flow effects including error handling, iteration, and asynchronous I/O. In The registers of Rust, boats lays out four aspects of this pattern shared by Rust’s three effects. Today these effects are typically used in isolation, or at most combined in bespoke ways, but the Rust project has been working on ways to integrate them more deeply with each other, such as async gen blocks.
The theory of algebraic effects and handlers has explored this design space and offers answers to many of the questions that the Rust project has encountered during this work. This post will relate the patterns employed by Rust to the terminology and semantics of effects, to help build a shared vocabulary and understanding of the implications of combining multiple effects.
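For a rough feel of the shared shape (my framing, not boats' code): each effect surfaces as a return type with one variant that continues with a value and one that yields control to the caller, and the language sugar (`?`, `for`, `.await`) hides the match.

```rust
use std::task::Poll;

// Fallibility: roughly what `?` desugars to. Ok continues, Err
// short-circuits and hands the effect to the caller.
fn parse_two(a: &str, b: &str) -> Result<i32, std::num::ParseIntError> {
    let x = match a.parse::<i32>() {
        Ok(v) => v,              // continue with the value
        Err(e) => return Err(e), // yield the effect to the caller
    };
    let y = b.parse::<i32>()?;   // the same thing, spelled with `?`
    Ok(x + y)
}

// Iteration has the same shape: Some(item) continues the loop body,
// None returns control to the code after the loop.
fn sum(mut it: impl Iterator<Item = i32>) -> i32 {
    let mut total = 0;
    loop {
        match it.next() {
            Some(v) => total += v,
            None => break,
        }
    }
    total
}

// Async: Poll::Ready continues, Poll::Pending yields to the executor.
fn describe(p: Poll<i32>) -> String {
    match p {
        Poll::Ready(v) => format!("ready: {v}"),
        Poll::Pending => "yield to executor".to_string(),
    }
}
```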
This post is all about speculative compilation, or just speculation for short, in the context of the JavaScriptCore virtual machine.
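In miniature (a hand-written sketch, nothing like JSC's actual machinery): the JIT emits a fast path for the representation it speculates on, guarded by a check that bails out to a generic path when the speculation fails.

```rust
// A dynamically-typed value, as a JS engine might model it.
enum Value {
    Int(i32),
    Double(f64),
}

// Speculative add: assume both operands are small integers.
fn add_speculative(a: &Value, b: &Value) -> Value {
    match (a, b) {
        // Fast path the compiler speculated on.
        (Value::Int(x), Value::Int(y)) => match x.checked_add(*y) {
            Some(sum) => Value::Int(sum),
            // Overflow: bail out to the double representation.
            None => Value::Double(*x as f64 + *y as f64),
        },
        // Guard failed: fall back to the slow, generic path
        // (a real engine would deoptimize back to the interpreter).
        _ => add_generic(a, b),
    }
}

fn add_generic(a: &Value, b: &Value) -> Value {
    let to_f64 = |v: &Value| match v {
        Value::Int(i) => *i as f64,
        Value::Double(d) => *d,
    };
    Value::Double(to_f64(a) + to_f64(b))
}
```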
A comparison of different runtimes with each other and with native code.
The generic dilemma is this: do you want slow programmers, slow compilers and bloated binaries, or slow execution times?
No generics / monomorphization / dynamic dispatch
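The two ends of the dilemma, spelled out in Rust terms (my example):

```rust
// Monomorphized: a separate copy is compiled per concrete T.
// Fast calls, but bigger binaries and slower compiles.
fn total_mono<T: Iterator<Item = i64>>(it: T) -> i64 {
    it.sum()
}

// Dynamic dispatch: one compiled copy, every `next` call goes
// through a vtable. Smaller binaries, faster compiles, slower calls.
fn total_dyn(it: &mut dyn Iterator<Item = i64>) -> i64 {
    let mut sum = 0;
    while let Some(v) = it.next() {
        sum += v;
    }
    sum
}
```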
A very nice paper about fuzzing the Rust compiler by generating custom MIR. It found some bugs in both rustc and LLVM, but notably not in Cranelift.
What if all software suddenly disappeared? What's the minimum you'd need to bootstrap a practical system? I decided to start with a one sector (512-byte) seed and find out how far I can get.
A paper about a message-passing memory allocator: could be useful for actor systems.
MLIR is an interesting compiler framework thingy from the LLVM project that applies the LLVM approach to multiple intermediate representations (dialects) instead of just one high-level assembly.
This tutorial runs through the implementation of a basic toy language on top of MLIR. The goal of this tutorial is to introduce the concepts of MLIR; in particular, how dialects can help easily support language specific constructs and transformations while still offering an easy path to lower to LLVM or other codegen infrastructure. This tutorial is based on the model of the LLVM Kaleidoscope Tutorial.
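For flavor, here is a hand-written snippet of what mixed-dialect MLIR looks like (not taken from the tutorial): ops from the `func` and `arith` dialects coexist in one function, and standard conversion passes (e.g. convert-arith-to-llvm, convert-func-to-llvm) lower each dialect independently toward the `llvm` dialect.

```mlir
// `func` and `arith` dialect ops mixed in a single function.
func.func @add_mul(%a: i32, %b: i32) -> i32 {
  %sum = arith.addi %a, %b : i32
  %prod = arith.muli %sum, %a : i32
  func.return %prod : i32
}
```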
The paper’s claim: “no sane compiler would optimize atomics.”
False. Compilers do optimize atomics, memory accesses around atomics, and utilize architecture-specific knowledge. This paper illustrates a few such optimizations, and discusses their implications.
Interestingly, none of the optimizations proposed in the paper actually work on GCC or Clang.
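One example of the kind of optimization the paper means (my sketch in Rust terms; the paper itself uses C++): two adjacent relaxed loads of the same atomic may legally be fused, because the fused version is one of the allowed interleavings.

```rust
use std::sync::atomic::{AtomicU32, Ordering};

// What the programmer wrote: two back-to-back relaxed loads.
fn before(x: &AtomicU32) -> u32 {
    let a = x.load(Ordering::Relaxed);
    let b = x.load(Ordering::Relaxed);
    a + b
}

// What a compiler could legally rewrite it to: a single load,
// since "both loads observe the same value" is a valid execution.
fn after(x: &AtomicU32) -> u32 {
    let a = x.load(Ordering::Relaxed);
    a + a
}
```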
Abstract: Polymorphic inline caches (PICs) provide a new way to reduce the overhead of polymorphic message sends by extending inline caches to include more than one cached lookup result per call site. For a set of typical object-oriented SELF programs, PICs achieve a median speedup of 11%.
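The mechanism in miniature (a toy sketch, not the paper's SELF implementation): each call site keeps a small list of (receiver type, method) pairs, so sends to already-seen types skip the full method lookup.

```rust
use std::any::TypeId;

type Method = fn() -> &'static str;

// A polymorphic inline cache for one call site: one entry per
// receiver type seen so far.
struct CallSiteCache {
    entries: Vec<(TypeId, Method)>,
}

impl CallSiteCache {
    fn send(
        &mut self,
        receiver_type: TypeId,
        slow_lookup: impl Fn(TypeId) -> Method,
    ) -> &'static str {
        // Fast path: hit in the inline cache.
        if let Some(&(_, m)) = self.entries.iter().find(|(t, _)| *t == receiver_type) {
            return m();
        }
        // Miss: do the expensive lookup once, then extend the cache.
        let m = slow_lookup(receiver_type);
        self.entries.push((receiver_type, m));
        m()
    }
}
```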
Introspectable and extensible IDEs with logic programming
Explanation of provenance, uninitialized memory and stuff like that. Useful as an entry point into the realization that we don’t live in a PDP-11 world anymore.
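The classic illustration of why provenance matters (a standard example, not necessarily the post's): two pointers can compare equal as addresses while only one of them may legally access the memory.

```rust
fn main() {
    let a = [1u8; 1];
    let b = [2u8; 1];
    let p = a.as_ptr().wrapping_add(1); // one-past-the-end of `a`
    let q = b.as_ptr();
    if p == q {
        // Equal addresses, yet `p`'s provenance is still `a`:
        // reading `b` through `p` would be undefined behavior.
        // The optimizer reasons about provenance, not raw addresses.
        println!("same address, different provenance");
    }
}
```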
The new Riptide garbage collector in WebKit leads to a five-fold improvement in latency in the JetStream/splay-latency test.
WebKit's FTL JIT now uses a new backend on OS X — the Bare Bones Backend, or B3 for short, replaces LLVM as the low-level optimizer.
B3 is designed to bring aggressive C-like optimizations to the largest variety of JavaScript programs.