Nick Fitzgerald (fitzgen@gmail.com), latest entries from fitzgeraldnick.com/weblog/

Memory Management In Oxischeme
2015-02-22T00:00:00-07:00
http://fitzgeraldnick.com/weblog/60/
<p>I've recently been playing with the <a href="http://www.rust-lang.org/">Rust programming language</a>, and what
better way to learn a language than to implement a second language in the
language one wishes to learn?! It almost goes without saying that this second
language being implemented should be <a href="https://en.wikipedia.org/wiki/Scheme_%28programming_language%29">Scheme</a>. Thus,
<a href="https://github.com/fitzgen/oxischeme">Oxischeme</a> was born.</p>
<p>Why implement Scheme instead of some other language? Scheme is a dialect of LISP
and inherits the simple parenthesized list syntax of its LISP-y origins, called
s-expressions. Thus, writing a parser for Scheme syntax is rather easy compared
to doing the same for most other languages' syntax. Furthermore, Scheme's
semantics are also minimal. It is a small language designed for teaching, and
writing a metacircular interpreter (i.e., a Scheme interpreter written in Scheme
itself) takes only a few handfuls of lines of code. Finally, Scheme is a
beautiful language: its design is rooted in the elegant
<a href="https://en.wikipedia.org/wiki/Lambda_calculus">&lambda;-calculus</a>.</p>
<p>Scheme is not "close to the metal" and doesn't provide direct access to the
machine's hardware. Instead, it provides the illusion of infinite memory and its
structures are automatically <em>garbage collected</em> rather than explicitly managed
by the programmer. When writing a Scheme implementation in Scheme itself, or any
other language with garbage collection, one can piggy-back on top of the host
language's garbage collector, and use that to manage the Scheme's
structures. This is not the situation I found myself in: Rust <em>is</em> close to the
metal, and does <em>not</em> have a runtime with garbage collection built in (although
it has some other cool ideas regarding lifetimes, ownership, and when it is safe
to deallocate an object). Therefore, I had to implement garbage collection
myself.</p>
<p>Faced with this task, I had a decision to make: tracing garbage collection or
reference counting?</p>
<p>Reference counting is a technique where we keep track of the number of other
things holding references to any given object or resource. When new references
are taken, the count is incremented. When a reference is dropped, the count is
decremented. When the count reaches zero, the resource is deallocated and it
decrements the reference count of any other objects it holds a reference
to. Reference counting is great because once an object becomes unreachable, it
is reclaimed immediately and doesn't sit around consuming valuable memory space
while waiting for the garbage collector to clean it up at some later point in
time. Additionally, the reclamation process happens incrementally and program
execution doesn't halt while every object in the heap is checked for
liveness. On the negative side, reference counting runs into trouble when it
encounters cycles. Consider the following situation:</p>
<pre><code>A -&gt; B
^    |
|    v
D &lt;- C
</code></pre>
<p>A, B, C, and D form a cycle and all have a reference count of one. Nothing from
outside of the cycle holds a reference to any of these objects, so it should be
safe to collect them. However, because each reference count will never get
decremented to zero, none of these objects will be deallocated. In practice, the
programmer must explicitly use (potentially unsafe) weak references, or the
runtime must provide a means for detecting and reclaiming cycles. The former
defeats the general purpose, don't-worry-about-it style of managed memory
environments. The latter is equivalent to implementing a tracing collector in
addition to the existing reference counting memory management.</p>
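<p>This leak is easy to reproduce with Rust's own reference-counted pointer, <code>Rc</code>. The following is a standalone sketch (not Oxischeme code) of a two-node cycle whose counts never reach zero:</p>

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A node that may point at one other node.
struct Node {
    next: RefCell<Option<Rc<Node>>>,
}

fn main() {
    let a = Rc::new(Node { next: RefCell::new(None) });
    let b = Rc::new(Node { next: RefCell::new(None) });

    // Build the cycle: a -> b and b -> a.
    *a.next.borrow_mut() = Some(b.clone());
    *b.next.borrow_mut() = Some(a.clone());

    // Each node is kept alive twice: once by the local variable,
    // once by the cycle edge.
    assert_eq!(Rc::strong_count(&a), 2);
    assert_eq!(Rc::strong_count(&b), 2);

    // When `a` and `b` go out of scope, each count drops only to
    // one, never to zero, so neither node is ever deallocated.
}
```

<p>Breaking such a cycle by hand requires a <code>std::rc::Weak</code> edge; collecting it automatically requires tracing, as described above.</p>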
<p>Tracing garbage collectors start from a set of <em>roots</em> and recursively traverse
object references to discover the set of live objects in the heap graph. Any
object that is not an element of the live set cannot be used again in the
future, because the program has no way to refer to that object. Therefore, the
object is available for reclaiming. This has the advantage of collecting dead
cycles, because if the cycle is not reachable from the roots, then it won't be
in the live set. The cyclic references don't matter because those edges are
never traversed. The disadvantage is that, without a lot of hard work, when the
collector is doing its bookkeeping, the program is halted until the collector is
finished analyzing the whole heap. This can result in long, unpredictable GC
pauses.</p>
<p>Reference counting is to tracing as yin is to yang. The former operates on dead,
unreachable objects while the latter operates on live, reachable things. Fun
fact: every high performance GC algorithm (such as generational GC or reference
counting with "purple" nodes and trial deletion) uses a mixture of both, whether
it appears so on the surface or not. "A Unified Theory of Garbage Collection" by
Bacon et al. discusses this in depth.</p>
<p>I opted to implement a tracing garbage collector for Oxischeme. In particular, I
implemented one of the simplest GC algorithms: stop-the-world
mark-and-sweep. The steps are as follows:</p>
<ol>
<li>Stop the Scheme program execution.</li>
<li>Mark phase. Trace the live heap starting from the roots and add every
reachable object to the <code>marked</code> set.</li>
<li>Sweep phase. Iterate over each object <code>x</code> in the heap:
<ul>
<li>If <code>x</code> is an element of the <code>marked</code> set, continue.</li>
<li>If <code>x</code> is <em>not</em> an element of the <code>marked</code> set, reclaim it.</li>
</ul></li>
<li>Resume execution of the Scheme program.</li>
</ol>
<p>Because the garbage collector needs to trace the complete heap graph, any
structure that holds references to a garbage collected type must participate in
garbage collection by tracing the GC things it is holding alive. In Oxischeme,
this is implemented with the <a href="https://fitzgen.github.io/oxischeme/oxischeme/heap/trait.Trace.html"><code>oxischeme::heap::Trace</code></a> trait,
whose implementation requires a <code>trace</code> function that returns an iterable of
<code>GcThing</code>s:</p>
<pre><code>pub trait Trace {
    /// Return an iterable of all of the GC things referenced by
    /// this structure.
    fn trace(&amp;self) -&gt; IterGcThing;
}
</code></pre>
<p>Note that Oxischeme separates tracing (generic heap graph traversal) from
marking (adding live nodes in the heap graph to a set). This enables using
<code>Trace</code> to implement other graph algorithms on top of the heap graph. Examples
include computing dominator trees and retained sizes of objects, or finding the
set of retaining paths of an object that you expected to be reclaimed by the
collector, but hasn't been.</p>
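<p>As a sketch of what such a traversal looks like when decoupled from marking, here is a toy reachability computation over an adjacency function (plain <code>usize</code> ids stand in for Oxischeme's <code>GcThing</code> pointers; none of these names come from the actual codebase):</p>

```rust
use std::collections::HashSet;

// Collect everything reachable from `roots` by repeatedly following
// outgoing edges -- the same walk the marker performs, but feeding a
// set we can inspect instead of setting mark bits.
fn reachable_from(roots: &[usize], edges: &dyn Fn(usize) -> Vec<usize>) -> HashSet<usize> {
    let mut seen: HashSet<usize> = roots.iter().cloned().collect();
    let mut pending: Vec<usize> = roots.to_vec();
    while let Some(node) = pending.pop() {
        for referent in edges(node) {
            // `insert` returns true only for newly seen nodes.
            if seen.insert(referent) {
                pending.push(referent);
            }
        }
    }
    seen
}

fn main() {
    // Heap graph: 0 -> 1 -> 2, plus an unreachable cycle 3 <-> 4.
    let edges = |n: usize| match n {
        0 => vec![1],
        1 => vec![2],
        3 => vec![4],
        4 => vec![3],
        _ => vec![],
    };
    let live = reachable_from(&[0], &edges);
    assert_eq!(live.len(), 3); // 0, 1, and 2 are live; 3 and 4 are garbage.
}
```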
<p>If we were to introduce a <code>Trio</code> type that contained three cons cells, we would
implement tracing like this:</p>
<pre><code>struct Trio {
    first: ConsPtr,
    second: ConsPtr,
    third: ConsPtr,
}

impl Trace for Trio {
    fn trace(&amp;self) -&gt; IterGcThing {
        let refs = vec!(GcThing::from_cons_ptr(self.first),
                        GcThing::from_cons_ptr(self.second),
                        GcThing::from_cons_ptr(self.third));
        refs.into_iter()
    }
}
</code></pre>
<p>What causes a garbage collection? As we allocate GC things, GC pressure
increases. Once that pressure crosses a threshold &mdash; BAM! &mdash; a
collection is triggered. Oxischeme's pressure application and threshold are very
naive at the moment: every N allocations a collection is triggered, regardless
of size of the heap or size of individual allocations.</p>
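<p>That "every N allocations" policy amounts to nothing more than a counter. A minimal sketch (the names here are illustrative, not Oxischeme's actual API):</p>

```rust
// Track GC pressure as a bare allocation counter with a fixed threshold.
struct GcPressure {
    allocations_since_gc: usize,
    threshold: usize,
}

impl GcPressure {
    fn new(threshold: usize) -> GcPressure {
        GcPressure { allocations_since_gc: 0, threshold }
    }

    // Called on every allocation; returns true when a collection
    // should be triggered.
    fn on_allocation(&mut self) -> bool {
        self.allocations_since_gc += 1;
        if self.allocations_since_gc >= self.threshold {
            self.allocations_since_gc = 0; // i.e., reset the pressure
            return true;
        }
        false
    }
}

fn main() {
    let mut pressure = GcPressure::new(3);
    let triggers: Vec<bool> = (0..6).map(|_| pressure.on_allocation()).collect();
    // A collection fires on every third allocation, regardless of the
    // size of the heap or of the individual allocations.
    assert_eq!(triggers, vec![false, false, true, false, false, true]);
}
```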
<p>A <em>root</em> is an object in the heap graph that is known to be live and
reachable. When marking, we start tracing from the set of roots. For example, in
Oxischeme, the global environment is a GC root.</p>
<p>In addition to permanent GC roots, like the global environment, sometimes it is
necessary to temporarily root GC things referenced by pointers on the
stack. Garbage collection can be triggered by any allocation, and it isn't
always clear which Rust functions (or other functions called by those functions,
or even other functions called by those functions called from the first
function, and so on) might allocate a GC thing, triggering collection. The
situation we want to avoid is a Rust function using a temporary variable that
references a GC thing, then calling another function which triggers a collection
and collects the GC thing that was referred to by the temporary variable. That
results in the temporary variable becoming a dangling pointer. If the Rust
function accesses it again, that is Undefined Behavior: it might still get the
value it was pointing at, it might segfault, or it might read a freshly
allocated value now being used by something else! Not good!</p>
<pre><code>let a = pointer_to_some_gc_thing;
function_which_can_trigger_gc();
// Oops! A collection was triggered and dereferencing this
// pointer leads to Undefined Behavior!
*a;
</code></pre>
<p>There are two possible solutions to this problem. The first is <em>conservative</em>
garbage collection, where we walk the native stack and if any value on the stack
looks like it might be a pointer and if coerced to a pointer happens to point to
a GC thing in the heap, we assume that it <em>is</em> in fact a pointer. Under this
assumption, it isn't safe to reclaim the object pointed to, and so we treat that
GC thing as a root. Note that this strategy is simple and easy to retrofit because
it doesn't involve changes in any code other than adding the stack scanning, but
it results in false positives. The second solution is <em>precise rooting</em>. With
precise rooting, it is the responsibility of the Rust function's author to
explicitly root and unroot pointers to GC things used in variables on the
stack. The advantage this provides is that there are no false positives: you
know exactly which stack values are pointers to GC things. The disadvantage is
the requirement of explicitly telling the GC about every pointer to a GC thing
you ever reference on the stack.</p>
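<p>The "looks like it might be a pointer" test a conservative collector applies to each stack word is roughly a bounds-and-alignment check. A hypothetical sketch (the address range is made up; a real collector checks each word against its actual arenas):</p>

```rust
// Conservatively decide whether a machine word found on the native
// stack could be a pointer into the GC heap: in range and aligned.
fn looks_like_heap_pointer(word: usize, heap_start: usize, heap_end: usize, align: usize) -> bool {
    word >= heap_start && word < heap_end && word % align == 0
}

fn main() {
    let (start, end) = (0x1000, 0x2000);

    // A genuine heap address passes the test...
    assert!(looks_like_heap_pointer(0x1010, start, end, 8));

    // ...but so does a plain integer that merely falls in range:
    // a false positive that must then be treated as a root.
    assert!(looks_like_heap_pointer(0x1ff8, start, end, 8));

    // Out-of-range words are ruled out.
    assert!(!looks_like_heap_pointer(0x2a00, start, end, 8));
}
```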
<p>Almost every modern, high performance tracing collector for managed runtimes
uses precise rooting because it is a prerequisite<sup>*</sup> of a <em>moving</em>
collector: a GC that relocates objects while performing collection. Moving
collectors are desirable because they can compact the heap, creating a smaller
memory footprint for programs and better cache locality. They can also implement
<em>pointer bumping</em> allocation, which is both simpler and faster than maintaining a
free list. Finally, they can split the heap into generations. Generational GCs
gain performance wins from the empirical observation that most allocations are
short lived, and those objects that are most recently allocated are most likely
to be garbage, so we can focus the GC's efforts on them to get the most bang for
our buck. Precise rooting is a requirement for a moving collector because it has
to update all references to point to the new address of each moved GC thing. A
conservative collector doesn't know for sure if a given value on the stack is a
reference to a GC thing or not, and if the value just so happens not to be a
reference to a GC thing (it is a false positive), and the collector "helpfully"
updates that value to the moved address, then the collector is introducing
migraine-inducing bugs into the program execution.</p>
<p><small><sup>*</sup> Technically, there do exist some moving and generational
collectors that are "mostly copying" and conservatively mark the stack but
precisely mark the heap. These collectors only move objects which are not
conservatively reachable.</small></p>
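<p>To see why pointer bumping is attractive, compare it with the free-list search described later in this post: allocating from a compacted region is just a bounds check and an add. A hypothetical sketch (byte offsets stand in for real object pointers):</p>

```rust
// A region allocated with a simple bump pointer: `next` only ever
// moves forward, so there is no free list to maintain or search.
struct BumpRegion {
    buffer: Vec<u8>,
    next: usize,
}

impl BumpRegion {
    fn new(size: usize) -> BumpRegion {
        BumpRegion { buffer: vec![0; size], next: 0 }
    }

    // Hand out the next `size` bytes, returning the new object's
    // offset, or None when the region is exhausted (which is when a
    // moving collector would collect and compact).
    fn allocate(&mut self, size: usize) -> Option<usize> {
        if self.next + size > self.buffer.len() {
            return None;
        }
        let offset = self.next;
        self.next += size;
        Some(offset)
    }
}

fn main() {
    let mut region = BumpRegion::new(64);
    assert_eq!(region.allocate(16), Some(0));
    assert_eq!(region.allocate(16), Some(16));
    assert_eq!(region.allocate(64), None); // only 32 bytes remain
}
```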
<p>Oxischeme uses precise rooting, but is not a moving GC (yet). Precise rooting is
implemented with the <a href="https://fitzgen.github.io/oxischeme/oxischeme/heap/struct.Rooted.html"><code>oxischeme::heap::Rooted&lt;T&gt;</code></a> smart pointer
<a href="https://en.wikipedia.org/wiki/Resource_Acquisition_Is_Initialization">RAII</a> type, which roots its referent upon construction and unroots it
when the smart pointer goes out of scope and is dropped.</p>
<p>Using precise rooting and <code>Rooted</code>, we can solve the dangling stack pointer
problem like this:</p>
<pre><code>{
    // The pointed-to GC thing gets rooted when wrapped
    // with `Rooted`.
    let a = Rooted::new(heap, pointer_to_some_gc_thing);
    function_which_can_trigger_gc();
    // Dereferencing `a` is now safe, because the referent is
    // a GC root, and can't be collected!
    *a;
}
// `a` is now out of scope, and its referent is unrooted.
</code></pre>
<p>With all of that out of the way, here is the implementation of our
mark-and-sweep collector:</p>
<pre><code>impl Heap {
    pub fn collect_garbage(&amp;mut self) {
        self.reset_gc_pressure();

        // First, trace the heap graph and mark everything that
        // is reachable.
        let mut pending_trace = self.get_roots();
        while !pending_trace.is_empty() {
            let mut newly_pending_trace = vec!();
            for thing in pending_trace.drain() {
                if !thing.is_marked() {
                    thing.mark();
                    for referent in thing.trace() {
                        newly_pending_trace.push(referent);
                    }
                }
            }
            pending_trace.append(&amp;mut newly_pending_trace);
        }

        // Second, sweep each `ArenaSet`.
        self.strings.sweep();
        self.activations.sweep();
        self.cons_cells.sweep();
        self.procedures.sweep();
    }
}
</code></pre>
<p>Why do we have four calls to <code>sweep</code>, one for each type that Oxischeme
implements? To explain this, first we need to understand Oxischeme's allocation
strategy.</p>
<p>Oxischeme does not allocate each individual object directly from the operating
system. In fact, most Scheme "allocations" do not actually perform any
allocation from the operating system (e.g., call <code>malloc</code> or
<code>Box::new</code>). Oxischeme uses a set of <a href="https://fitzgen.github.io/oxischeme/oxischeme/heap/struct.Arena.html"><code>oxischeme::heap::Arena</code>s</a>, each of
which have a preallocated object pool with each item in the pool either being
used by live GC things, or waiting to be used in a future allocation. We keep
track of an <code>Arena</code>'s available objects with a "free list" of indices into its
pool.</p>
<pre><code>type FreeList = Vec&lt;usize&gt;;

/// An arena from which to allocate `T` objects.
pub struct Arena&lt;T&gt; {
    pool: Vec&lt;T&gt;,

    /// The set of free indices into `pool` that are available
    /// for allocating an object from.
    free: FreeList,

    /// During a GC, if the nth bit of `marked` is set, that
    /// means that the nth object in `pool` has been marked as
    /// reachable.
    marked: Bitv,
}
</code></pre>
<p>When the Scheme program allocates a new object, we remove the first entry from
the free list and return a pointer to the object at that entry's index in the
object pool. If every <code>Arena</code> is at capacity (i.e., its free list is empty), a
new <code>Arena</code> is allocated from the operating system and its object pool is used
for the requested Scheme allocation.</p>
<pre><code>impl&lt;T: Default&gt; ArenaSet&lt;T&gt; {
    pub fn allocate(&amp;mut self) -&gt; ArenaPtr&lt;T&gt; {
        for arena in self.arenas.iter_mut() {
            if !arena.is_full() {
                return arena.allocate();
            }
        }

        let mut new_arena = Arena::new(self.capacity);
        let result = new_arena.allocate();
        self.arenas.push(new_arena);
        result
    }
}

impl&lt;T: Default&gt; Arena&lt;T&gt; {
    pub fn allocate(&amp;mut self) -&gt; ArenaPtr&lt;T&gt; {
        match self.free.pop() {
            Some(idx) =&gt; {
                let self_ptr : *mut Arena&lt;T&gt; = self;
                ArenaPtr::new(self_ptr, idx)
            },
            None =&gt; panic!("Arena is at capacity!"),
        }
    }
}
</code></pre>
<p>For simplicity, Oxischeme has separate arenas for separate types of
objects. This sidesteps the problem of finding an appropriately sized free block
of memory when allocating different sized objects from the same pool, the
fragmentation that can occur because of that, and lets us use a plain old vector
as the object pool. However, this also means that we need a separate
<code>ArenaSet&lt;T&gt;</code> for each <code>T</code> object that a Scheme program can allocate and why
<code>oxischeme::heap::Heap::collect_garbage</code> has four calls to <code>sweep()</code>.</p>
<p>During the sweep phase of Oxischeme's garbage collector, we return the entries
of any dead object back to the free list. If an <code>Arena</code> is empty (i.e., its free
list is full), then we return the <code>Arena</code>'s memory to the operating system. This
prevents retaining the peak amount of memory used for the rest of the program
execution.</p>
<pre><code>impl&lt;T: Default&gt; Arena&lt;T&gt; {
    pub fn sweep(&amp;mut self) {
        self.free = range(0, self.capacity())
            .filter(|&amp;n| {
                !self.marked.get(n)
                    .expect("marked should have length == self.capacity()")
            })
            .collect();

        // Reset `marked` to all zero.
        self.marked.set_all();
        self.marked.negate();
    }
}

impl&lt;T: Default&gt; ArenaSet&lt;T&gt; {
    pub fn sweep(&amp;mut self) {
        for arena in self.arenas.iter_mut() {
            arena.sweep();
        }

        // Deallocate any arenas that do not contain any
        // reachable objects.
        self.arenas.retain(|a| !a.is_empty());
    }
}
</code></pre>
<p>This concludes our tour of Oxischeme's current implementation of memory
management, allocation, and garbage collection for Scheme programs. In the
future, I plan to make Oxischeme's collector a moving collector, which will pave
the way for a compacting and generational GC. I might also experiment with
incrementalizing marking for lower latency and shorter GC pauses, or making
sweeping lazy. Additionally, I intend to
<a href="https://github.com/fitzgen/oxischeme/issues/13">declare to the Rust compiler that operations on un-rooted GC pointers are unsafe</a>,
but I haven't settled on an implementation strategy yet. I would also like to
experiment with writing a syntax extension for the Rust compiler so that it can
derive <code>Trace</code> implementations, and they don't need to be written by hand.</p>
<p><em>Thanks to <a href="http://tromey.com/blog/">Tom Tromey</a> and
<a href="https://twitter.com/zii">Zach Carter</a> for reviewing drafts.</em></p>
<h2>References</h2>
<ul>
<li><p><a href="https://github.com/fitzgen/oxischeme">Oxischeme, a Scheme implementation in Rust</a></p></li>
<li><p><a href="https://en.wikipedia.org/wiki/Tracing_garbage_collection">Tracing Garbage Collection &mdash; Wikipedia</a></p></li>
<li><p><a href="http://www.cs.virginia.edu/~cs415/reading/bacon-garbage.pdf">A Unified Theory of Garbage Collection</a> by Bacon et
al. Great paper!</p></li>
<li><p><a href="http://www.evanjones.ca/memoryallocator/">Improving Python's Memory Allocator</a> by Evan Jones. This was a nice
resource when planning Oxischeme's allocation strategy. Interesting side note:
CPython uses reference counting and a cycle detector rather than a tracing
collector.</p></li>
</ul>
Naming `eval` Scripts With The `//# sourceURL` Directive
2014-12-05T00:00:00-07:00
http://fitzgeraldnick.com/weblog/59/
<p>In Firefox 36, SpiderMonkey (Firefox's JavaScript engine) now supports the
<em>//#&nbsp;sourceURL=my-display-url.js</em> directive. This allows developers to give
a name to a script created by <code>eval</code> or <code>new Function</code>, and get better stack
traces.</p>
<p>To demonstrate this, let's use a minimal version of
<a href="http://ejohn.org/blog/javascript-micro-templating/">John Resig's micro templater</a>. The
micro templater compiles template strings into JS source code that it passes to
<code>new Function</code>, thus transforming the template string into a function.</p>
<pre><code>function tmpl(str) {
  return new Function("obj",
    "var p=[],print=function(){p.push.apply(p,arguments);};" +

    // Introduce the data as local variables using with(){}
    "with(obj){p.push('" +

    // Convert the template into pure JavaScript
    str
      .replace(/[\r\t\n]/g, " ")
      .split("&lt;%").join("\t")
      .replace(/((^|%&gt;)[^\t]*)'/g, "$1\r")
      .replace(/\t=(.*?)%&gt;/g, "',$1,'")
      .split("\t").join("');")
      .split("%&gt;").join("p.push('")
      .split("\r").join("\\'")
    + "');}return p.join('');");
}
</code></pre>
<p>The details of how the template is converted into JavaScript source code aren't
important; what is important is that it dynamically creates new scripts via code
evaluated in <code>new Function</code>.</p>
<p>We can define a new templater function:</p>
<pre><code>var hello = tmpl("&lt;h1&gt;Hello, &lt;%=name%&gt;&lt;/h1&gt;");
</code></pre>
<p>And use it like this:</p>
<pre><code>hello({ name: "World!" });
// "&lt;h1&gt;Hello, World!&lt;/h1&gt;"
</code></pre>
<p>When we get an error, SpiderMonkey will generate a name for the <code>eval</code>ed (or in
this case, <code>new Function</code>ed) script based on the location where the call to
<code>eval</code> (or <code>new Function</code>) occurred. For our concrete example, this is the
generated name for the hello templater function's frame:</p>
<pre><code>file:///Users/fitzgen/scratch/foo.js line 2 &gt; Function
</code></pre>
<p>And here it is in the context of an error with the whole stack trace:</p>
<pre><code>hello({ name: Object.create(null) });
// TypeError: can't convert Object to string
// Stack trace:
// anonymous@file:///Users/fitzgen/scratch/foo.js line 2 &gt; Function:1:107
// @file:///Users/fitzgen/scratch/foo.js:28:3
</code></pre>
<p>Despite being a solid improvement over just "eval frame" or something of that
sort, these stack traces can still be difficult to read. If there are many
different templater functions, the value of the <code>eval</code> script's introduction
location is further diminished. It is difficult to determine which of the many
functions created by <code>tmpl</code> contains the thrown error: they all end up
with the same name, because they were all created at the same location.</p>
<p>We can improve this situation with the <code>//# sourceURL</code> directive.</p>
<p>Consider this version of the <code>tmpl</code> function adapted to use the <code>//# sourceURL</code>
directive:</p>
<pre><code>function tmpl(name, str) {
  return new Function("obj",
    "var p=[],print=function(){p.push.apply(p,arguments);};" +

    // Introduce the data as local variables using with(){}
    "with(obj){p.push('" +

    // Convert the template into pure JavaScript
    str
      .replace(/[\r\t\n]/g, " ")
      .split("&lt;%").join("\t")
      .replace(/((^|%&gt;)[^\t]*)'/g, "$1\r")
      .replace(/\t=(.*?)%&gt;/g, "',$1,'")
      .split("\t").join("');")
      .split("%&gt;").join("p.push('")
      .split("\r").join("\\'")
    + "');}return p.join('');"
    + "//# sourceURL=" + name);
}
</code></pre>
<p>Note that the function takes a new parameter called <code>name</code> and appends <code>//#
sourceURL=&lt;name&gt;</code> to the end of the generated JS code passed to <code>new Function</code>.</p>
<p>With this new version of <code>tmpl</code>, we create our templater function like this:</p>
<pre><code>var hello = tmpl("hello.template", "&lt;h1&gt;Hello &lt;%=name%&gt;&lt;/h1&gt;");
</code></pre>
<p>Now SpiderMonkey will use the name given by the <code>//# sourceURL</code> directive,
instead of using a name based on the introduction site of the script:</p>
<pre><code>hello({ name: Object.create(null) });
// TypeError: can't convert Object to string
// Stack trace:
// anonymous@hello.template:1:107
// @file:///Users/fitzgen/scratch/foo.js:25:3
</code></pre>
<p>Giving the eval script a name makes it easier for us to debug errors originating
from it, and we can give a different name to different scripts created at the
same location.</p>
<p>The <code>//# sourceURL</code> directive is also particularly useful for dynamic code- and
module-loaders, which fetch source text over the network and then <code>eval</code> it.</p>
<p>Additionally, <strike>in Firefox 37</strike>, <code>eval</code>ed sources with a <code>//#
sourceURL</code> directive will be debuggable and labeled with the name specified by
the directive in the debugger. (<strong>Update</strong>: <code>//# sourceURL</code> support in the
debugger is also available in Firefox 36!)</p>
wu.js 2.0
2014-08-07T00:00:00-07:00
http://fitzgeraldnick.com/weblog/58/
<p>On May 21st, 2010, I started experimenting with lazy, functional streams in
JavaScript with a library I named after the Wu-Tang Clan.</p>
<pre><code>commit 9d3c5b19a088f6e33888c215f44ab59da4ece302
Author: Nick Fitzgerald &lt;fitzgen@gmail.com&gt;
Date:   Fri May 21 22:56:49 2010 -0700

    First commit
</code></pre>
<p>Four years later, the feature-complete, partially-implemented, and
soon-to-be-finalized ECMAScript 6 supports lazy streams in the form of
generators and its iterator protocol. Unfortunately, ES6 iterators are missing
the higher order functions you expect: map, filter, reduce, etc.</p>
<p>Today, I'm happy to announce the release of <code>wu.js</code> version 2.0, which has been
completely rewritten for ES6.</p>
<p><code>wu.js</code> aims to provide higher order functions for ES6 iterables. Some of them
you already know (<code>filter</code>, <code>some</code>, <code>reduce</code>) and some of them might be new to
you (<code>reductions</code>, <code>takeWhile</code>). <code>wu.js</code> works with <em>all</em> ES6 iterables,
including <code>Array</code>s, <code>Map</code>s, <code>Set</code>s, and generators you write yourself. You don't
have to wait for ES6 to be fully implemented by every JS engine: <code>wu.js</code> can be
compiled to ES5 with the Traceur compiler.</p>
<p>Here are a couple of small examples:</p>
<pre><code>const factorials = wu.count(1).reductions((last, n) =&gt; last * n);
// (1, 2, 6, 24, ...)

const isEven = x =&gt; x % 2 === 0;
const evens = wu.filter(isEven);
evens(wu.count());
// (0, 2, 4, 6, ...)
</code></pre>
<p><a href="http://fitzgen.github.io/wu.js/">Check out the <code>wu.js</code> documentation here.</a></p>
Come Work With Me On Firefox Developer Tools
2014-07-08T00:00:00-07:00
http://fitzgeraldnick.com/weblog/57/
<p>My team at Mozilla, the half of the larger devtools team that works on
JavaScript and performance tools, is looking to hire another software engineer.</p>
<p>We have members of the devtools team in our San Francisco, London, Vancouver,
Paris, Toronto, and Portland offices, but many also work remotely.</p>
<p>We are responsible for writing the full stack of the tools we create, from the
C++ platform APIs exposing SpiderMonkey and Gecko internals, to the
JavaScript/CSS/HTML based frontend that you see when you open the Firefox
Developer Tools.</p>
<p>Some of the things we're working on:</p>
<ul>
<li><p>A JavaScript debugger</p></li>
<li><p>A performance tool that incorporates a sampling CPU-profiler, platform events
tracing, and information from SpiderMonkey's Just-In-Time compiler</p></li>
<li><p>An allocations and heap usage profiler (this is what I'm working on, and <a href="http://fitzgeraldnick.com/weblog/54/">I wrote
about it here</a>)</p></li>
<li><p>A WebGL shader live-editor, and a canvas debugger to step through individual
draw calls</p></li>
</ul>
<p>One of the most important things for me is that every line of code we write at
Mozilla is Free and Open Source from day one, and we're dedicated to keeping the
web open.</p>
<p><a href="http://jobvite.com/m?3pvexgwq">Apply here!</a></p>
Debugging Web Performance With Firefox DevTools - Velocity 2014
2014-06-26T00:00:00-07:00
http://fitzgeraldnick.com/weblog/56/
<p>On Tuesday, June 3rd, 2014, I gave a presentation on debugging web performance
with <a href="https://twitter.com/FirefoxDevTools">Firefox DevTools</a> to the Velocity Conf 2014, Santa Clara. I'm not
sure how useful the slides are without me talking, but here they are:</p>
<h3><a href="http://media.fitzgeraldnick.com/velocity2014/presentation/html5/template.html?full#cover">Debugging Web Performance with Firefox DevTools - Velocity 2014</a></h3>