poita.org

# The Condenser Part 1

Last night I rewatched Joe Armstrong’s excellent keynote from this year’s Strange Loop conference. He begins the talk with a fairly standard but humorous lament about the sorry state of software, then goes on to relate this to some physical quantities and limits. He finishes by suggesting some possible directions we can look to improve things.

One of his suggestions is to build something called “The Condenser”, which is a program that takes all programs in the world and condenses them down in such a way that all redundancy is removed. He breaks this task down into two parts:

• Part 1: merge all identical files.
• Part 2: merge all similar files using least compression difference.

He also explains the obvious and easy way to do Part 1.

At this point I remembered that I hadn’t written a lot of D recently, and although Joe is talking about condensing all files in the world, I was kind of curious how much duplication there was just on my laptop. How many precious bytes am I wasting?

It turns out that The Condenser Part 1 is very easy to write in D, and even quite elegant.

import std.algorithm;
import std.digest.sha;
import std.file;
import std.parallelism;
import std.stdio;
import std.typecons;

auto process(DirEntry e) {
    ubyte[4 << 10] buf = void;  // 4 KiB stack buffer, uninitialized
    auto sha1 = File(e.name).byChunk(buf[]).digest!SHA1;
    return tuple(e.name, sha1, e.size);
}

void main(string[] args) {
    string[ubyte[20]] hash2file;  // hash table of SHA1 -> filename
    auto files = dirEntries(args[1], SpanMode.depth, false)
                 .filter!(e => e.isFile);
    foreach (t; taskPool.map!process(files)) {
        if (string* original = t[1] in hash2file) {
            writefln("%s %s duplicates %s", t[2], t[0], *original);
        } else {
            hash2file[t[1]] = t[0];
        }
    }
}

I’ll explain the code a little, starting from main. The files variable uses dirEntries and filter to produce the sequence of files to process (the filter is necessary since dirEntries also returns directories). dirEntries is a lazy range, so the directories are only traversed as the range is iterated. Similarly, filter creates another lazy range, which removes non-file entries.

The foreach loop corresponds to Joe’s do loop. While I could have just iterated over files directly, I instead iterate over a taskPool.map, a semi-lazy map from David Simcha’s std.parallelism that does the processing in parallel on multiple threads. It is semi-lazy because it eagerly takes a chunk of elements at a time from the front of the range, processes them in parallel, then returns them back to the caller to consume. The task pool size defaults to the number of CPU cores on your machine, but this can be configured.
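To illustrate how taskPool.map behaves on its own, here is a minimal sketch (the squaring function and work-unit size are just for demonstration, not part of the program above):

```d
import std.parallelism : taskPool;
import std.stdio : writeln;

void main()
{
    // taskPool.map eagerly pulls elements from the front of the range
    // in work units (here, 2 elements per unit), maps them on worker
    // threads, and hands the results back to the caller in order.
    auto results = taskPool.map!(x => x * x)([1, 2, 3, 4, 5], 2);
    foreach (r; results)
        writeln(r);  // prints 1, 4, 9, 16, 25, one per line
}
```

Even though the work is done on multiple threads, the results come back in their original order, which is what lets the duplicate-detection loop above consume them as if it were a plain foreach.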

The process function is where the hashing is done. It takes a DirEntry, allocates an uninitialized (= void) 4 KiB buffer on the stack, and uses that buffer to read the file 4 KiB at a time and build up the SHA1 digest. Again, byChunk returns a lazy range, and digest!SHA1 consumes it. Lazy ranges are very common in idiomatic D, especially in these kinds of stream-processing applications. It’s worth getting familiar with the common ranges in std.algorithm and std.range.

I ran the program over my laptop’s home directory. Since the duplicate file sizes are all in the first column, I can just use awk to sum them up and find the total waste:

$ sudo ./the-condenser . | awk '{s+=$1} END {print s}'
2347353793

That’s 2.18 GB of duplicate files, which is around 10% of my home dir.

In some cases the duplicates are just files that I have absentmindedly downloaded twice. In other cases it’s duplicate music that I have both in iTunes and Google Play. For some reason, the game Left 4 Dead 2 seems to have a lot of duplicate audio files.

So, in theory, I could save 10% by deduplicating all these files, but it’s not a massive amount of waste. It doesn’t seem worth the effort trying to fix this. Perhaps it would be nice for the OS or file system to solve this automatically, but the implementation would be tricky, and still maybe not worth it.

Part 2 of the Condenser is to merge all similar files using least compression difference. This is decidedly more difficult, since it relies on near-perfect compression, which is apparently an AI-hard problem. Unfortunately, solving AI is out of scope for this post, so I’ll leave that for another time :-)

# Cube Vertex Numbering

If you do any sort of graphics, physics, or 3D programming in general then at some point you are going to be working with cubes, and at some point you are going to need to assign a number to each vertex in those cubes. Usually this is because you need to store them in an array, or maybe you are labelling the children of an octree, but often you just need some consistent way of identifying vertices.

For whatever reason, a surprisingly common way of numbering vertices is like this, with the bottom vertices numbered in counter-clockwise order followed by the top vertices in counter-clockwise order (warning, incoming ASCII art):

  7-------6
 /|      /|
4-+-----5 |
| |     | |   y
| 3-----+-2   | z
|/      |/    |/
0-------1     +--x


Depending on what you are doing, there are generally only a few operations you need from the numbering scheme:

• Converting a number to a coordinate.
• Converting a coordinate to a number.
• Enumerating the vertices adjacent to a given vertex.

Unfortunately, this common numbering scheme makes none of these tasks easy, and you have no choice but to produce tables of coordinates and adjacency lists for each vertex number. This is tedious, error-prone, and completely unnecessary.

Here is a better scheme:

  3-------7
 /|      /|
2-+-----6 |
| |     | |   y
| 1-----+-5   | z
|/      |/    |/
0-------4     +--x


This numbering uses the Lexicographic Ordering of the vertices, with 0 being the lexicographically first vertex, and 7 being the last.

| Number (decimal) | Number (binary) | Coordinate |
|------------------|-----------------|------------|
| 0                | 000             | 0, 0, 0    |
| 1                | 001             | 0, 0, 1    |
| 2                | 010             | 0, 1, 0    |
| 3                | 011             | 0, 1, 1    |
| 4                | 100             | 1, 0, 0    |
| 5                | 101             | 1, 0, 1    |
| 6                | 110             | 1, 1, 0    |
| 7                | 111             | 1, 1, 1    |

I’ve included the binary representation of the vertex number in the table above, because it illuminates the key property of this scheme: the coordinate of a vertex is equal to the binary representation of its vertex number.

\begin{aligned} coordinate(n) &= ((n \gg 2) \mathrel{\&} 1, (n \gg 1) \mathrel{\&} 1, (n \gg 0) \mathrel{\&} 1) \\ number(x, y, z) &= (x \ll 2) \mathrel{|} (y \ll 1) \mathrel{|} (z \ll 0) \end{aligned}

(Of course, the zero shifts are unnecessary, but I’ve added them to highlight the symmetry.)

This numbering scheme also makes adjacent vertex enumeration easy. As an adjacent vertex is just a vertex that differs along one axis, we just need to flip each bit using XOR to get the adjacent vertices:

\begin{aligned} adj_x(n) &= n \oplus 100_2 \\ adj_y(n) &= n \oplus 010_2 \\ adj_z(n) &= n \oplus 001_2 \end{aligned}
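The formulae above translate directly into code. Here is a minimal sketch in D (the function names are mine, chosen for illustration):

```d
// Vertex number -> coordinate: each coordinate is one bit of n.
int[3] coordinate(int n)
{
    return [(n >> 2) & 1, (n >> 1) & 1, (n >> 0) & 1];
}

// Coordinate -> vertex number: pack the bits back together.
int number(int x, int y, int z)
{
    return (x << 2) | (y << 1) | (z << 0);
}

// Adjacent vertices differ along exactly one axis, so flip one bit.
int adjX(int n) { return n ^ 0b100; }
int adjY(int n) { return n ^ 0b010; }
int adjZ(int n) { return n ^ 0b001; }

unittest
{
    assert(coordinate(5) == [1, 0, 1]);  // vertex 5 = binary 101
    assert(number(1, 0, 1) == 5);
    assert(adjY(5) == 7);                // 101 ^ 010 = 111
}
```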

It should be clear that this numbering scheme trivially extends to higher-dimensional hypercubes by simply using more bits in the representation and extending the formulae in the obvious way.

So there you have it. Whenever you are looking to number things and aren’t sure what order to use, start with lexicographic. It usually has nice encoding and enumeration properties, and if not then it is probably no worse than any other ordering.

# Range-Based Graph Search in D

I’ve just made my first commit of a range-based graph library for D. At the moment it only contains a few basic search algorithms (DFS, BFS, Dijkstra, and A*), but I wanted to write a bit about the design of the library, and how you can use it.

As a bit of a teaser, the snippet below shows how you can use the library to solve those word change puzzles (you know, the ones where you have to get from one word to another by changing only one letter at a time?)

string[] words = File("/usr/share/dict/words")
    .byLine
    .filter!(w => w.length == 5)
    .map!"a.idup"
    .array();
enum string start = "hello";
enum string end = "world";
static auto d(string a, string b) { return zip(a, b).count!"a[0] != a[1]"; }
auto graph = implicitGraph!(string, u => words.filter!(v => d(u, v) == 1));
graph.aStarSearch!(v => d(v, end))(start).find(end).path.writeln();

This promptly produces the desired result:

["hello", "hollo", "holly", "molly", "mouly", "mould", "would", "world"]

The last two lines are the interesting part. The first of those lines defines our graph as an implicit graph of words connected by one-letter changes. The second line performs the A* search and writes the resultant path to stdout.

On the second-last line, graph is defined as an implicitGraph. An implicit graph is a graph that is represented by functions rather than in-memory data structures. Instead of having an array of adjacency lists, we just define a function that returns a range of adjacent words (that’s the second parameter to implicitGraph). This representation saves us the tedium of generating the graph beforehand, and saves your RAM from having to store it unnecessarily. It also makes for succinct graph definitions!

The last line is where the algorithm is called. Unlike most other graph libraries, stdex.graph implements graph searches as ranges – iterations of the graph vertices. aStarSearch returns a range of vertices in the order they would be visited by the A* search algorithm (parameterized by the A* heuristic). By implementing the search as a range, the user can take advantage of Phobos’ multitude of range algorithms.

For instance, suppose you want to count the number of nodes searched by the algorithm until it reaches the target node. For this task, you can just use countUntil from std.algorithm.

auto search = graph.aStarSearch!(v => d(v, end))(start);
writefln("%d nodes visited", search.countUntil(end));

For tuning and debugging, it might be useful to print out the nodes visited by the algorithm as it runs.

foreach (node; search.until(end))
    writeln(node);

When no path is available, graph search algorithms typically end up searching the entire vertex space. It is common to cut off the search after a certain threshold of nodes has been searched. This can be accomplished by taking only so many nodes from the search with std.range’s take.

auto result = search.take(50).find(end);
if (result.empty)
    writeln("Not found within 50 nodes");

Similarly, you could have the search continue until 10 seconds have elapsed.

alias now = TickDuration.currSystemTick;
auto t = now;
auto result = search.until!(_ => (now - t).seconds > 10).find(end);
if (result.empty)
    writeln("Not found in 10 seconds");

The beauty of all this is that none of these features are part of the graph library – they come for free as a side-effect of being a range.

One unsolved part of the design for the graph searches is how to handle visitation. The Boost Graph Library solves this by having users implement a visitor object, which has to define a suite of callback methods, one for each event (vertex discovered, examine edge, etc.). This could certainly be mimicked in D, but it may be more effective to model visitation with output ranges, which would again allow composition with existing Phobos constructs. This is what I will be investigating next.

# Ranges Part 1 - Basics

Iteration is one of the most common activities in programming. Few programming tasks can be accomplished without looping or recursing over some set of values, whether it be a stream of bytes from a file, elements of an array, rows in a database, or nodes in an implicitly generated graph. Programs need to iterate, and programming languages need to provide idioms for iteration.

In D, iteration is achieved through the use of ranges. Broadly speaking, a range defines a sequence of values. There are many styles of ranges. For example, some ranges lazily generate their values instead of iterating over data; some ranges are infinite; some ranges can be iterated from both ends; and some ranges allow you to jump around to any index (like arrays). For the most part, the details of how a particular range operates are up to the range implementor.

So how do you implement a range? At the most basic level, a range is just a type with three member functions: front, empty, and popFront.

• front returns the value from the front of the range.
• empty returns true if the range has been exhausted.
• popFront removes the first value from the range.

As a simple example, here is a range that counts from 0 up to (but not including) n.

struct Iota
{
    private int m_current = 0;
    private int m_target;

    this(int n) { m_target = n; }
    @property int front() { return m_current; }
    @property bool empty() { return m_current == m_target; }
    void popFront() { m_current++; }
}

(Iota is the name of a Greek letter. Following a tradition started by the programming language APL, it is used in several languages to represent a sequence of consecutive integers.)

Notice that there is no need to inherit from any sort of IRange interface. There’s nothing magic about ranges, they are just simple types with those functions defined.

To iterate over a range, you can manually call those functions using a for loop, or use the built-in foreach loop in D, which knows about ranges.
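The manual version, driving the range by hand with a for loop (reusing the Iota struct defined above), looks like this:

```d
void main()
{
    import std.stdio;
    // Test empty, read front, then popFront — the three range primitives.
    for (auto r = Iota(10); !r.empty; r.popFront())
        writeln(r.front);
}
```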

void main()
{
    import std.stdio;
    foreach (x; Iota(10))
        writeln(x);
}

As expected, this will print out the numbers 0 through 9, one per line. Of course, writeln knows about ranges too, so writeln(Iota(10)) works as well, and will print out [0, 1, 2, 3, 4, 5, 6, 7, 8, 9], formatted like an array.

Writing functions that work with ranges is not difficult. Suppose we want to write a function to count the number of elements in a range. This can be achieved by accepting the range type as a template parameter and iterating the range by calling popFront() explicitly, like so:

size_t count(Range)(Range r)
{
    size_t n = 0;
    while (!r.empty)
    {
        ++n;
        r.popFront();
    }
    return n;
}

unittest
{
    assert(count(Iota(10)) == 10);
}

By using templates, we can create range types that are templated on other range types. For example, consider a skip range that skips every second element in another range. You could define it like this:

struct Skip(Range)
{
    private Range m_range;

    this(Range r) { m_range = r; }

    // front and empty just forward to the sub-range.
    @property auto ref front() { return m_range.front; }
    @property bool empty() { return m_range.empty; }

    // popFront also forwards to the sub-range, but pops off two
    // elements at a time, instead of one.
    void popFront()
    {
        m_range.popFront();
        if (!m_range.empty)
            m_range.popFront();
    }
}

We can then use Skip in conjunction with Iota to skip through consecutive integers.

void main()
{
    import std.stdio;
    writeln(Skip!Iota(Iota(10))); // [0, 2, 4, 6, 8]
}

Notice that we have to redundantly specify Iota as a template parameter to Skip. This is because D doesn’t currently support template argument deduction for constructors. A common workaround is to create a module-level function that wraps the constructor:

auto skip(Range)(Range r) { return Skip!Range(r); }
auto iota(int n) { return Iota(n); }

We can then use these like so:

writeln(iota(10).skip());

The above code also makes use of another D feature: uniform function call syntax, which basically means f(x) can be written as x.f() as if f were a member function of x. If you come from the C# world, you can kind of think of it as every free function being an Extension Method. We could have also written 10.iota().skip() or just plain skip(iota(10)). There’s no difference, so choose whatever you think is most readable.

The ability to combine arbitrary ranges is perhaps their most powerful feature. There’s no reason we need to stop at combining just two ranges:

writeln(iota(10)); // [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
writeln(iota(10).skip()); // [0, 2, 4, 6, 8]
writeln(iota(10).skip().skip()); // [0, 4, 8]
writeln(iota(10).skip().skip().skip()); // [0, 8]

It shouldn’t be hard to imagine the kind of flexibility and expressiveness that can be achieved once you have a large vocabulary of ranges at your disposal.

In Part 2 we’ll look at some of the different categories of ranges.

# DustMite - Code Minimizer

Yesterday I discovered DustMite, a source code minimizer tool for D.

What’s a source code minimizer? It’s a program that takes some source code and automatically removes lines, or even whole files, from the source while maintaining some invariant (e.g. it still builds, or still gives a particular output). The usual use case is to produce a minimal reproduction test case for a bug, which is exactly what I used it for.

I had a problem in a medium-sized D project where, any time I had some sort of compile-time error, such as a typo in a variable name, DMD (the D compiler) would spew out pages and pages of false errors, triggered by some other module in my codebase. The errors were completely irrelevant. This was quite annoying because it made it difficult to see the actual error. So, like a good programmer, I decided to file a bug report.

Step one in filing a good bug report is to create a minimal test case. I tried to do this manually by pulling out what seemed to be the relevant parts of the code that would reproduce the error, but I couldn’t get it to happen. I could easily and consistently reproduce the bug in the full codebase, but the project is about 10,000 lines of code, so minimizing it manually from there would be very time consuming.

I’d heard about DustMite on the forums, so I thought I’d give it a go.

Installing was easy enough.

% git clone https://github.com/CyberShadow/DustMite.git
% cd DustMite
% dmd dustmite.d dsplit.d
% ln -s ~/DustMite/dustmite ~/bin

Next step was to prepare the project for minimization. All you need to do is make a full copy of the codebase, and remove any unnecessary files (e.g. object files, data files – anything not needed for reproduction).

Next you need to devise a test command. This is a command that should return 0 if the bug is still present, and non-zero otherwise. For example, suppose we had this (trivial) program:

import std.stdio;
import std.string;

string world() { return "world!"; }

void main()
{
    write(hello());
    write(", ");
    write(world());
    write("\n");
}

I’ve got an obvious bug here in that hello() is undefined. Trying to compile this gives Error: undefined identifier hello. If we want to minimize this, we need a command which, when run, returns 0 while the bug is still present.

Compiling and grepping for the error would be perfect for this.

% dmd test.d 2>&1 | grep -q "undefined identifier hello"
% echo $?
0

This runs the compiler (dmd test.d), redirects the error messages to stdout (2>&1) then pipes the result (|) to grep, which searches for that string without producing output (-q = quiet). grep returns 0 if it finds anything, and 1 otherwise.

The DustMite wiki provides a list of useful test scripts.

Finally, we just run DustMite with that test command.

% dustmite testdir 'dmd test.d 2>&1 | grep -q "undefined identifier hello"'
None => Yes
############### ITERATION 0 ################
[  0.0%] Remove [] => No
[  1.6%] Remove [0] => No (cached)
[  3.3%] Remove [01] => No
...
[ 82.4%] Remove [000001] => No (cached)
[ 88.2%] Remove [000000] => No (cached)
[ 94.1%] Remove [0000000] => No (cached)
Done in 36 tests and 10 secs and 165 ms; reduced version is in testdir.reduced

And checking testdir.reduced/test.d confirms that the source has been reduced.

% cd testdir.reduced
% cat test.d
void main()
{
    hello;
}

For my project, DustMite took 8 minutes, and reduced roughly 10,000 lines of code down to about 10, and perfectly reproduced the issue. The problem I was having trying to reproduce manually was that the repro required that you pass the files in a specific order to the compiler, and it also required an extra file that seemingly has nothing to do with the issue. I don’t think I would have ever minimized it without DustMite.