<html><head><title>The design of toybox</title></head>
<!--#include file="header.html" -->
<b><h2>Design goals</h2></b>
<p>Toybox should be simple, small, fast, and full featured. Often, these
things need to be balanced off against each other. In general, keeping the
code simple is the most important (and hardest) goal, and small is slightly more
important than fast. Features are the reason we write code in the first
place, but this has all been implemented before, so if we can't do a better
job why bother? It should be possible to get 80% of the way to each goal
before they really start to fight.</p>
<p>Here they are in reverse order of importance:</p>
<b><h3>Features</h3></b>
<p>The <a href=roadmap.html>roadmap</a> has the list of features we're
trying to implement, and the reasons for them. After the 1.0 release
some of that material may get moved here.</p>
<p>Some things are simply outside the scope of the project: even though
posix defines commands for compiling and linking, we're not going to include
a compiler or linker (and support for a potentially infinite number of hardware
targets). And until somebody comes up with a ~30k ssh implementation, we're
going to point you at dropbear or polarssl.</p>
<p>Environmental dependencies are a type of complexity, so needing other
packages to build or run is a big downside. For example, we don't use curses
when we can simply output ansi escape sequences and trust all terminal
programs written in the past 30 years to be able to support them. (A common
use case is to download a statically linked toybox binary to an arbitrary
Linux system, and use it in an otherwise unknown environment; being
self-contained helps support this.)</p>
<b><h3>Speed</h3></b>
<p>It's easy to say lots about optimizing for speed (which is why this section
is so long), but at the same time it's the optimization we care the least about.
The essence of speed is being as efficient as possible, which means doing as
little work as possible. A design that's small and simple gets you 90% of the
way there, and most of the rest is either fine-tuning or more trouble than
it's worth (and often actually counterproductive). Still, here's some
advice:</p>
<p>First, understand the darn problem you're trying to solve. You'd think
I wouldn't have to say this, but I do. Trying to find a faster sorting
algorithm is no substitute for figuring out a way to skip the sorting step
entirely. The fastest way to do anything is not to have to do it at all,
and _all_ optimization boils down to avoiding unnecessary work.</p>
<p>Speed is easy to measure; there are dozens of profiling tools for Linux
(although personally I find the "time" command a good starting place).
Don't waste too much time trying to optimize something you can't measure,
and there's not much point speeding up things you don't spend much time doing
anyway.</p>
<p>Understand the difference between throughput and latency. Faster
processors improve throughput, but don't always do much for latency.
After 30 years of Moore's Law, most of the remaining problems are latency,
not throughput. (There are of course a few exceptions, like data compression
code, encryption, rsync...) Worry about throughput inside long-running
loops, and worry about latency everywhere else. (And don't worry too much
about avoiding system calls or function calls or anything else in the name
of speed unless you're in the middle of a tight loop that you've already
proven isn't running fast enough.)</p>
<p>"Locality of reference" is generally nice, in all sorts of contexts.
It's obvious that waiting for disk access is 1000x slower than doing stuff in
RAM (and making the disk seek is 10x slower than sequential reads/writes),
but it's just as true that a loop which stays in L1 cache is many times faster
than a loop that has to wait for a DRAM fetch on each iteration. Don't worry
about whether "&" is faster than "%" until your executable loop stays in L1
cache and the data access is fetching cache lines intelligently. (To
understand DRAM, L1, and L2 cache, read Hannibal's marvelous ram guide at Ars
Technica:
<a href=http://arstechnica.com/paedia/r/ram_guide/ram_guide.part1-2.html>part one</a>,
<a href=http://arstechnica.com/paedia/r/ram_guide/ram_guide.part2-1.html>part two</a>,
<a href=http://arstechnica.com/paedia/r/ram_guide/ram_guide.part3-1.html>part three</a>,
plus this
<a href=http://arstechnica.com/articles/paedia/cpu/caching.ars/1>article on
cacheing</a>, and this one on
<a href=http://arstechnica.com/articles/paedia/cpu/bandwidth-latency.ars>bandwidth
and latency</a>.
And there's <a href=http://arstechnica.com/paedia/index.html>more where that came from</a>.)
Code running entirely out of L1 cache can execute one instruction per clock cycle,
going to L2 cache costs a dozen or so clock cycles, and waiting for a worst case DRAM
fetch (round trip latency with a bank switch) can cost thousands of
clock cycles. (Historically, this disparity has gotten worse with time,
just like the speed hit for swapping to disk. These days, a _big_ L1 cache
is 128k and a big L2 cache is a couple of megabytes. A cheap low-power
embedded processor may have 8k of L1 cache and no L2.)</p>
<p>Learn how virtual memory and memory management units work. Don't touch
memory you don't have to. Even just reading memory evicts stuff from L1 and L2
cache, which may have to be read back in later. Writing memory can force the
operating system to break copy-on-write, which allocates more memory. (The
memory returned by malloc() is only a virtual allocation, filled with lots of
copy-on-write mappings of the zero page. Actual physical pages get allocated
when the copy-on-write gets broken by writing to the virtual page. This
is why checking the return value of malloc() isn't very useful anymore: it
only detects running out of virtual memory, not physical memory. Unless
you're using a NOMMU system, where all bets are off.)</p>
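<p>As a minimal sketch of what that means in practice (the gigabyte size below
is an arbitrary illustration, not anything toybox does):</p>
<blockquote>
<pre>
#include &lt;stdlib.h>
#include &lt;string.h>

int main(void)
{
  // Normally succeeds immediately: the kernel hands back copy-on-write
  // mappings of the zero page, not physical RAM.
  char *buf = malloc(1L << 30);

  // This check only catches running out of _virtual_ address space.
  if (!buf) return 1;

  // Physical pages get allocated here, as each page is first written,
  // which is where the system can actually run out of memory.
  memset(buf, 1, 1L << 30);

  free(buf);

  return 0;
}
</pre></blockquote>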
<p>Don't think that just because you don't have a swap file the system can't
start swap thrashing: any file backed page (ala mmap) can be evicted, and
there's a reason all running programs require an executable file (they're
mmaped, and can be flushed back to disk when memory is short). And long
before that, disk cache gets reclaimed and has to be read back in. When the
operating system really can't free up any more pages it triggers the out of
memory killer to free up pages by killing processes (the alternative is the
entire OS freezing solid). Modern operating systems seldom run out of
memory gracefully.</p>
<p>Also, it's better to be simple than clever. Many people think that mmap()
is faster than read() because it avoids a copy, but twiddling with the memory
management is itself slow, and can cause unnecessary CPU cache flushes. And
if a read faults in dozens of pages sequentially, but your mmap iterates
backwards through a file (causing lots of seeks, each of which your program
blocks waiting for), the read can be many times faster. On the other hand, the
mmap can sometimes use less memory, since the memory provided by mmap
comes from the page cache (allocated anyway), and it can be faster if you're
doing a lot of different updates to the same area. The moral? Measure, then
try to speed things up, and measure again to confirm it actually _did_ speed
things up rather than made them worse. (And understanding what's really going
on underneath is a big help to making it happen faster.)</p>
<p>In general, being simple is better than being clever. Optimization
strategies change with time. For example, decades ago precalculating a table
of results (for things like isdigit() or cosine(int degrees)) was clearly
faster because processors were so slow. Then processors got faster and grew
math coprocessors, and calculating the value each time became faster than
the table lookup (because the calculation fit in L1 cache but the lookup
had to go out to DRAM). Then cache sizes got bigger (the Pentium M has
2 megabytes of L2 cache) and the table fit in cache, so the table became
fast again... Predicting how changes in hardware will affect your algorithm
is difficult, and using ten year old optimization advice can produce
laughably bad results. But being simple and efficient is always going to
give at least a reasonable result.</p>
<p>The famous quote from Ken Thompson, "When in doubt, use brute force",
applies to toybox. Do the simple thing first, do as little of it as possible,
and make sure it's right. You can always speed it up later.</p>
<b><h3>Size</h3></b>
<p>Again, simple gives you most of this. An algorithm that does less work
is generally smaller. Understand the problem, treat size as a cost, and
get a good bang for the byte.</p>
<p>Understand the difference between binary size, heap size, and stack size.
Your binary is the executable file on disk, your heap is where malloc() memory
lives, and your stack is where local variables (and function call return
addresses) live. Optimizing for binary size is generally good: executing
fewer instructions makes your program run faster (and fits more of it in
cache). On embedded systems, binary size is especially precious because
flash is expensive (and its successor, MRAM, even more so). Small stack size
is important for nommu systems because they have to preallocate their stack
and can't make it bigger via page fault. And everybody likes a small heap.</p>
<p>Measure the right things. Especially with modern optimizers, expecting
something to be smaller is no guarantee it will be after the compiler's done
with it. Binary size isn't the most accurate indicator of the impact of a
given change, because lots of things get combined and rounded during
compilation and linking. Matt Mackall's bloat-o-meter is a python script
which compares two versions of a program, and shows size changes in each
symbol (using the "nm" command behind the scenes). To use this, run
"make baseline" to build a baseline version to compare against, and
then "make bloatometer" to compare that baseline version against the current
code.</p>
<p>Avoid special cases. Whenever you see similar chunks of code in more than
one place, it might be possible to combine them and have the users call shared
code. (This is the most commonly cited trick, which doesn't make it easy. If
seeing two lines of code do the same thing makes you slightly uncomfortable,
you've got the right mindset.)</p>
<p>Some specific advice: Using a char in place of an int when doing math
produces significantly larger code on some platforms (notably arm),
because each time the compiler has to emit code to convert it to int, do the
math, and convert it back. Bitfields have this problem on most platforms.
Because of this, using char to index a for() loop is probably not a net win,
although using char (or a bitfield) to store a value in a structure that's
repeated hundreds of times can be a good tradeoff of binary size for heap
space.</p>
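<p>For example (a hypothetical fragment, not code from the tree):</p>
<blockquote>
<pre>
struct entry {
  char type;        // repeated hundreds of times: trading size for heap space
  unsigned offset;
};

void fill(struct entry *table, int count)
{
  int i;            // loop math in int avoids the widen/calculate/narrow dance

  for (i = 0; i < count; i++) {
    table[i].type = i & 7;
    table[i].offset = 16*i;
  }
}
</pre></blockquote>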
<b><h3>Simple</h3></b>
<p>Complexity is a cost, just like code size or runtime speed. Treat it as
a cost, and spend your complexity budget wisely. (Sometimes this means you
can't afford a feature because it complicates the code too much to be
worth it.)</p>
<p>Simplicity has lots of benefits. Simple code is easy to maintain, easy to
port to new processors, easy to audit for security holes, and easy to
understand.</p>
<p>Simplicity itself can have subtle non-obvious aspects requiring a tradeoff
between one kind of simplicity and another: simple for the computer to
execute and simple for a human reader to understand aren't always the
same thing. A compact and clever algorithm that does very little work may
not be as easy to explain or understand as a larger more explicit version
requiring more code, memory, and CPU time. When balancing these, err on the
side of doing less work, but add comments describing how you
could be more explicit.</p>
<p>In general, comments are not a substitute for good code (or well chosen
variable or function names). Commenting "x += y;" with "/* add y to x */"
can actually detract from the program's readability. If you need to describe
what the code is doing (rather than _why_ it's doing it), that means the
code itself isn't very clear.</p>
<p>Prioritizing simplicity tends to serve our other goals: simplifying code
generally reduces its size (both in terms of binary size and runtime memory
usage), and avoiding unnecessary work makes code run faster. Smaller code
also tends to run faster on modern hardware due to CPU cacheing: fitting your
code into L1 cache is great, and staying in L2 cache is still pretty good.</p>
<p><a href=http://www.joelonsoftware.com/articles/fog0000000069.html>Joel
Spolsky argues against throwing code out and starting over</a>, and he has
good points: an existing debugged codebase contains a huge amount of baked
in knowledge about strange real-world use cases that the designers didn't
know about until users hit the bugs, and most of this knowledge is never
explicitly stated anywhere except in the source code.</p>
<p>That said, the Mythical Man-Month's "build one to throw away" advice points
out that until you've solved the problem you don't properly understand it, and
about the time you finish your first version is when you've finally figured
out what you _should_ have done. (The corollary is that if you build one
expecting to throw it away, you'll actually wind up throwing away two. You
don't understand the problem until you _have_ solved it.)</p>
<p>Joel is talking about what closed source software can afford to do: Code
that works and has been paid for is a corporate asset not lightly abandoned.
Open source software can afford to re-implement code that works, over and
over from scratch, for incremental gains. Before toybox, the unix command line
had already been reimplemented from scratch several times in a row (the
original AT&T Unix command line in assembly and then in C, the BSD
versions, the GNU tools, BusyBox...) but maybe toybox can do a better job. :)</p>
<p>P.S. How could I resist linking to an article about
<a href=http://blog.outer-court.com/archive/2005-08-24-n14.html>why
programmers should strive to be lazy and dumb</a>?</p>
<b><h2>Portability issues</h2></b>
<b><h3>Platforms</h3></b>
<p>Toybox should run on Android (all commands with musl-libc, as large a subset
as practical with bionic), and every other hardware platform Linux runs on.
Other posix/susv4 environments (perhaps MacOS X or newlib+libgloss) are vaguely
interesting but only if they're easy to support; I'm not going to spend much
effort on them.</p>
<p>I don't do windows.</p>
<b><h3>32/64 bit</h3></b>
<p>Toybox should work on both 32 bit and 64 bit systems. By the end of 2008
64 bit hardware will be the new desktop standard, but 32 bit hardware will
continue to be important in embedded devices for years to come.</p>
<p>Toybox relies on the fact that on any Unix-like platform, pointer and long
are always the same size (on both 32 and 64 bit). Pointer and int are _not_
the same size on 64 bit systems, but pointer and long are.</p>
<p>This is guaranteed by the LP64 memory model, a Unix standard (which Linux
and MacOS X both implement). See
<a href=http://www.unix.org/whitepapers/64bit.html>the LP64 standard</a> and
<a href=http://www.unix.org/version2/whatsnew/lp64_wp.html>the LP64
rationale</a> for details.</p>
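<p>If you want that assumption stated in code, the classic compile-time check
(shown here as a sketch, not something the toybox build currently does) is:</p>
<blockquote>
<pre>
/* Fails to compile (negative array size) on any target where pointer and
   long aren't the same size. */
char this_build_requires_lp64[sizeof(long) == sizeof(void *) ? 1 : -1];
</pre></blockquote>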
<p>Note that Windows doesn't work like this, and I don't care.
<a href=http://blogs.msdn.com/oldnewthing/archive/2005/01/31/363790.aspx>The
insane legacy reasons why this is broken on Windows are explained here.</a></p>
<b><h3>Signedness of char</h3></b>
<p>On platforms like x86, variables of type char default to signed. On
platforms like arm, char defaults to unsigned. This difference can lead to
subtle portability bugs, and to avoid them we specify which one we want by
feeding the compiler -funsigned-char.</p>
<p>The reason to pick "unsigned" is that way we're 8-bit clean by default.</p>
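<p>Here's the kind of subtle bug that choice avoids (a made-up example):</p>
<blockquote>
<pre>
// Count bytes with the high bit set (e.g. the non-ascii bytes of UTF-8).
int count_high(char *s)
{
  int n = 0;

  while (*s) {
    // With signed char, bytes 0x80-0xff sign extend to negative ints and
    // this test is never true; with -funsigned-char it works as written.
    if (*s >= 0x80) n++;
    s++;
  }

  return n;
}
</pre></blockquote>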
<p><h3>Error messages and internationalization:</h3></p>
<p>Error messages are extremely terse not just to save bytes, but because we
don't use any sort of _("string") translation infrastructure.</p>
<p>Thus "bad -A '%c'" is
preferable to "Unrecognized address base '%c'", because a non-english speaker
can see that -A was the problem, and with a ~20 word english vocabulary is
more likely to know (or guess) "bad" than the longer message.</p>
<p>The help text might someday have translated versions, and strerror()
messages produced by perror_exit() and friends can be expected to be
localized by libc. Our error functions also prepend the command name,
which non-english speakers can presumably recognize already.</p>
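<p>In code, that looks something like this (a fragmentary sketch: error_exit()
is assumed here as the non-errno sibling of the perror_exit() mentioned above,
and the variable names are invented):</p>
<blockquote>
<pre>
// Terse: name the flag and the offending value, and let the error function
// prepend the command name.
if (!strchr("doxn", *arg)) error_exit("bad -A '%c'", *arg);

// For system call failures, libc's (possibly localized) strerror() text
// supplies the detail.
if (-1 == (fd = open(name, O_RDONLY))) perror_exit("%s", name);
</pre></blockquote>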
<p>An eventual goal is <a href=http://yarchive.net/comp/linux/utf8.html>UTF-8</a> support, although it isn't a priority for the
first pass of each command. (All commands should at least be 8-bit clean.)</p>
<p>Locale support isn't currently a goal; that's a presentation layer issue,
X11 or Dalvik's problem.</p>
<a name="codestyle" />
<h2>Coding style</h2>
<p>The real coding style holy wars are over things that don't matter
(whitespace, indentation, curly bracket placement...) and thus have no
obviously correct answer. As in academia, "the fighting is so vicious because
the stakes are so small". That said, being consistent makes the code readable,
so here's how to make toybox code look like other toybox code.</p>
<p>Toybox source uses two spaces per indentation level, and wraps at 80
columns. (Indentation of continuation lines is awkward no matter what
you do, sometimes two spaces looks better, sometimes indenting to the
contents of the parentheses looks better.)</p>
<p>There's a space after C flow control statements that look like functions, so
"if (blah)" instead of "if(blah)". (Note that sizeof is actually an
operator, so we don't give it a space for the same reason ++ doesn't get
one. Yeah, it doesn't need the parentheses either, but it gets them.
These rules are mostly to make the code look consistent, and thus easier
to read.) We also put a space around assignment operators (on both sides),
so "int x = 0;".</p>
<p>Blank lines (vertical whitespace) go between thoughts. "We were doing that,
now we're doing this." (Not a hard and fast rule about _where_ it goes,
but there should be some.)</p>
<p>Variable declarations go at the start of blocks, with a blank line between
them and other code. Yes, c99 allows you to put them anywhere, but they're
harder to find if you do that. If there's a large enough distance between
the declaration and the code using it to make you uncomfortable, maybe the
function's too big, or is there an if statement or something you can
use as an excuse to start a new closer block?</p>
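<p>For instance (a made-up fragment):</p>
<blockquote>
<pre>
#include &lt;stdio.h>

void print_names(char **names, int count)
{
  int i;

  for (i = 0; i < count; i++) {
    // Only used inside the loop, so declared at the start of this block,
    // right next to the code that uses it.
    char *name = names[i];

    printf("%s\n", name);
  }
}
</pre></blockquote>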
<p>If statements with a single line body go on the same line if the result
fits in 80 columns, on a second line if it doesn't. We usually only use
curly brackets if we need to, either because the body is multiple lines or
because we need to distinguish which if an else binds to. Curly brackets go
on the same line as the test/loop statement. The exception to both cases is
if the test part of an if statement is long enough to split into multiple
lines, then we put the curly bracket on its own line afterwards (so it doesn't
get lost in the multiple line variably indented mess), and we put it there
even if it's only grouping one line (because the indentation level is not
providing clear information in that case).</p>
<p>I.E.</p>
<blockquote>
<pre>
if (thingy) thingy;
else thingy;

if (thingy) {
  thingy;
  thingy;
} else thingy;

if (blah blah blah...
    && blah blah blah)
{
  thingy;
}
</pre></blockquote>
<p>Gotos are allowed for error handling, and for breaking out of
nested loops. In general, a goto should only jump forward (not back), and
should either jump to the end of an outer loop, or to error handling code
at the end of the function. Goto labels are never indented: they override the
block structure of the file. Putting them at the left edge makes them easy
to spot as overrides to the normal flow of control, which they are.</p>
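<p>A sketch of the allowed pattern (hypothetical function, not from the tree):</p>
<blockquote>
<pre>
#include &lt;fcntl.h>
#include &lt;unistd.h>

int copy_file(char *from, char *to)
{
  int rc = 1, in, out = -1;

  in = open(from, O_RDONLY);
  if (in == -1) goto done;
  out = open(to, O_WRONLY|O_CREAT|O_TRUNC, 0666);
  if (out == -1) goto done;

  // ... copy loop, jumping forward to "done" on any error ...

  rc = 0;
done:
  if (out != -1) close(out);
  if (in != -1) close(in);

  return rc;
}
</pre></blockquote>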
<p>When there's a shorter way to say something, we tend to do that for
consistency. For example, we tend to say "*blah" instead of "blah[0]" unless
we're referring to more than one element of blah. Similarly, NULL is
really just 0 (and C will automatically typecast 0 to anything, except in
varargs), "if (function() != NULL)" is the same as "if (function())",
"x = (blah == NULL);" is "x = !blah;", and so on.</p>
<p>The goal is to be
concise, not cryptic: if you're worried about the code being hard to
understand, splitting it to multiple steps on multiple lines is
better than a NOP operation like "!= NULL". A common sign of trying too
hard is nesting ? : three levels deep; sometimes if/else and a temporary
variable is just plain easier to read. If you think you need a comment,
you may be right.</p>
<p>Comments are nice, but don't overdo it. Comments should explain _why_,
not how. If the code doesn't make the how part obvious, that's a problem with
the code. Sometimes choosing a better variable name is more revealing than a
comment. Comments on their own line are better than comments on the end of
lines, and they usually have a blank line before them. Most of toybox's
comments are c99 style // single line comments, even when there's more than
one of them. The /* multiline */ style is used at the start for the metadata,
but not so much in the code itself. They don't nest cleanly, are easy to leave
accidentally unterminated, need extra nonfunctional * to look right, and if
you need _that_ much explanation maybe what you really need is a URL citation
linking to a standards document? Long comments can fall out of sync with what
the code is doing. Comments do not get regression tested. There's no such
thing as self-documenting code (if nothing else, code with _no_ comments
is a bit unfriendly to new readers), but "chocolate sauce isn't the answer
to bad cooking" either. Don't use comments as a crutch to explain unclear
code if the code can be fixed.</p>
<!--#include file="footer.html" -->