James W. Hanlon
http://jwhanlon.com/

Notes on testing random number generators
http://jwhanlon.com/notes-on-testing-random-number-generators.html
Wed, 05 Apr 2017 · computing

Recently I have been doing some work testing the quality of random number
generators (RNGs, https://en.wikipedia.org/wiki/Random_number_generator), so I
thought I would record things that should be useful as a reference. I won't
provide too much background here since there are many good existing references
to the theory and practice of …
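A typical first check of RNG quality is a statistical goodness-of-fit test. As
a minimal sketch of the idea (my own illustration, not code from the post), a
chi-squared test of uniformity in Python:

```python
import random

def chi_squared_uniformity(samples, bins=10):
    """Chi-squared goodness-of-fit statistic for uniformity on [0, 1)."""
    counts = [0] * bins
    for x in samples:
        counts[int(x * bins)] += 1
    expected = len(samples) / bins
    return sum((c - expected) ** 2 / expected for c in counts)

samples = [random.random() for _ in range(100_000)]
stat = chi_squared_uniformity(samples)
# With 10 bins there are 9 degrees of freedom; the 5% critical value is ~16.92.
print(f"chi-squared statistic: {stat:.2f} (critical value ~16.92 at p=0.05)")
```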
A convolutional neural network from scratch
http://jwhanlon.com/a-convolutional-neural-network-from-scratch.html
Fri, 10 Feb 2017 · machine-intelligence

The online book 'Neural Networks and Deep Learning'
(http://neuralnetworksanddeeplearning.com) by Michael Nielsen is an excellent
introduction to neural networks and the world of deep learning. As the book
works through the theory, it makes it concrete by explaining how the concepts
are implemented using Python. The complete Python programs are available …
(https://github.com/mnielsen/neural-networks-and-deep-learning)
however memory on-chip is expensive relative to computational resources such as
integer and floating-point units, and access to external <span class="caps">DRAM</span> memory is orders
of magnitude slower. This article surveys some recent results that demonstrate
the economy of reducing memory …</p>James W. HanlonSun, 05 Feb 2017 00:00:00 +0100tag:jwhanlon.com,2017-02-05:/reducing-memory-use-in-deep-neural-networks.html/computingmachine-intelligenceMachine learning challenges for computer architecturehttp://jwhanlon.com/machine-learning-challenges-for-computer-architecture.html<p>Neural networks have become a hot topic in computing and their development is
Machine learning challenges for computer architecture
http://jwhanlon.com/machine-learning-challenges-for-computer-architecture.html
Fri, 04 Nov 2016 · machine-intelligence, computing

Neural networks have become a hot topic in computing and their development is
progressing rapidly. They have a long history, with some of the first designs
proposed in the 1940s. But despite being an active area of research since
then, it has not been until the last five to ten …
The XMOS XMP-64
http://jwhanlon.com/the-xmos-xmp-64.html
Thu, 04 Dec 2014 · computing, electronics

The XMP-64 is an experimental single-board distributed-memory parallel
computer with 512 hardware threads and is programmable with a C-like language.
It was developed by XMOS (https://www.xmos.com) in 2009 to demonstrate the
scalability of the XS1 architecture (https://en.wikipedia.org/wiki/XCore_XS1).
Since then it has been discontinued
(https://www.xmos.com/published/xmp-64-end-life) but remains a fascinating
device from the point …

Scalable abstractions for general-purpose parallel computation
http://jwhanlon.com/scalable-abstractions-for-general-purpose-parallel-computation.html
Wed, 01 Oct 2014 · computing

An overview of my PhD thesis.