Base 3 is any weighted positional numbering system which uses three digits. I'll spare you an
introduction to positional numbering systems, which has been reiterated countless times when
introducing base 2.

Base 3 is traditionally known as ternary. I prefer trinary to
ternary or tertiary. My dictionary
defines ternary as "Composed of three or arranged in threes, having the base three,"
trinary as "Consisting of three parts or proceeding by threes; ternary,"
and tertiary as "Third in place, order, degree, or rank".
They're nearly synonyms, although tertiary would be more appropriate if base
2 were called secondary. Perhaps my preference stems from avid use of Perl, and
Larry Wall using trinary over ternary in
Apocalypse 3.
In fact, IIRC an old version of perlop had a quote about ternary being old-fashioned, but don't quote me on that.

Analog computers use a continuous range of voltages to represent data. Their
advantage is that only one wire is needed to store any number.
Opamps can
perform integration, differentiation, root extraction,
multiplication/division, logarithms, anti-logs, and more. However, analog
computers are kept from the mainstream by their inherent
inaccuracy. Resistance, which occurs in all conductors except superconductors,
degrades the signal. It cannot be amplified without knowing how much loss
actually occurred, and that varies with ambient temperature. Of course, this
problem doesn't exist with analog devices such as speakers because they
operate on alternating current, where the frequency carries the signal
rather than the amplitude. But I digress. Analog has its uses, but not in
the precise arithmetic which modern computers require.

Digital computers are defined as using distinct voltages. The first digital computers used
ten voltage levels, meaning they were base 10 - or decimal. Atanasoff came up
with the idea of using two voltage levels. According to
Debate
Stirs Over Origin of Computers:

Atanasoff was thinking about computers. There were already mechanical and
analog computers. But Atanasoff thought there might be better methods of
computing. He drove from dry Iowa to a bar over the Illinois line, drank
three Scotch and waters, and had a Eureka! moment.

"That's when he figured out he could do everything in base 2," Gustafson
says. Base 2 is digital. It's 1s and 0s. Previous computers worked in base
10. "He jotted on a cocktail napkin all the basic principles of modern
computing."

(The quote is slightly inaccurate. Base 2 is indeed digital, but so is base 10.)

"A common man marvels at uncommon things; a wise man marvels at
the commonplace."

According to Third Base, e is the optimal base when efficiency is measured as
r×w, where r = radix and w = width (number of digits). 3 is the closest integer to e, closer than 2.
See the article for more information, including how the author used base 3 to organize
his file folders more efficiently.
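The radix-economy claim is easy to check numerically. The sketch below (the `cost` helper is my own name, not from the article) computes r×w for a fixed count of representable values, where w = log_r(N):

```python
import math

def cost(radix, n_values=10**6):
    """Radix economy: hardware cost ~ radix * width, where width is the
    number of digits needed to represent n_values distinct values."""
    width = math.log(n_values) / math.log(radix)
    return radix * width

# Cost is minimized at r = e (~2.718), so base 3 edges out base 2.
for r in (2, 3, 4, 10):
    print(r, round(cost(r), 2))

assert cost(3) < cost(2) and cost(3) < cost(10)
```

Note that cost(2) and cost(4) come out exactly equal, since r/ln(r) is the same for both; 3 is the unique integer minimum.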

5.4 DATA TYPES
The two TriINTERCAL data types are 10-trit unsigned integers and
20-trit unsigned integers. All INTERCAL syntax for distinguishing
data types is ported to these new types in the obvious way. Small
words may contain numbers from #0 to #59048, large words may contain
numbers from #0$#0 to #59048$#59048. Errors are signaled for constants
greater than #59048 and for attempts to WRITE IN numbers too large
for a given variable or array element to hold.

The 10- and 20-trit numbers are remarkably close to their 16- and 32-bit
counterparts. 16 bits store as much as 16*(log(2)/log(3)) =~ 10.0949 trits,
and 32 bits store as much as 32*(log(2)/log(3)) =~ 20.1898 trits. 64 bits
are about 40.3795 trits. On the surface, it looks as if the TriINTERCAL
programmers picked 10 and 20 because they're multiples of ten, but upon deeper
inspection this is obviously not the case. On a related note, 2^8=256, which
is quite near 3^5=243.
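The bit/trit arithmetic above can be verified directly (the `trits_equivalent` helper is my own naming, a sketch):

```python
import math

def trits_equivalent(bits):
    """Number of trits carrying the same information as `bits` bits."""
    return bits * math.log(2) / math.log(3)

print(round(trits_equivalent(16), 4))  # ~10.0949
print(round(trits_equivalent(32), 4))  # ~20.1898

# Range checks matching the TriINTERCAL manual's limits:
assert 3**10 - 1 == 59048        # 10-trit small word maximum
assert 2**8 == 256 and 3**5 == 243
```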

Base 2 word sizes are almost always powers of 2. Following this trend,
we'll choose power-of-3 word sizes where possible.

Suggested trit groupings:

Trits (base 3)   Digits (base 9)   Digits (base 27)   Max (decimal, 3^trits)   Name            Description
1                1/2               1/3                3                        trit            Well established.
2                1                 2/3                9                        nit             One base-9 digit.
3                3/2               1                  27                       tribble         Half of a tryte, one base-27 digit.
6                3                 2                  729                      tryte           Analogous to a byte.
9                9/2               3                  19,683                   tryte and 1/2   9 = 3^2
24               12                8                  282,429,536,481          ?               Good word size perhaps.
27               27/2              9                  7,625,597,484,987        ?               Seems like a good word size.

There has been much discussion about trinary digit groupings on
Slashdot | Ternary Computing Revisited,
but I believe these make the most sense. At least,
one user
agreed. A document on trinary computing says a tryte is 6 trits (i.e. two
tribbles), although 6 is not an integer power of 3. The argument is that a nibble is
1/2 byte, so a tribble should be 1/2 tryte. This document uses 6-trit trytes, i.e. 2
tribbles, with 1 tribble = 1/2 tryte.

As discussed in section 3, trinary digits can be defined as
{0,1,2} (unbalanced) or {-1,0,1} (balanced). Of these, {-1,0,1} can be mapped to
{F,?,T}, where ? is unknown (simultaneously T and F) - this is known as
"Unknown-State Logic" and is covered elsewhere. However, Boolean algebra can
be mapped in other ways: {T,F,T}, where -1 and 1 are both true and only 0 is
false. This more closely resembles conventional logic, but is less useful.
"Trinary Coded Binary" can refer to either.

For textual output, Unicode is used. Blocks are often partially encoded
because Unicode is aligned on hexadecimal boundaries, but if more characters
can be used, the number of trits can be extended. See RFC 4042 for UTF-9 and UTF-18.

Boolean (binary) algebra was invented in 1854 by George Boole (1815-1864).
It's well known: whole books have been written on it, and algorithms have
been developed for it. This section attempts to help the world fully understand
trinary algebra, so it can be used more extensively.

Constant functions: the output is always the same. You can't derive the input
from the output, not even partially. Uninteresting, and never used in practice.
If you want a specific trit value, no input is needed - the wire is connected
directly to the neutral, positive, or negative rail.

The "Diff" column points to the trit's original position. ' means the trit
stayed in the same place, / points left, \ right. These functions are their
own inverses, except for the rotates. Some rules:

ÇÇA = ÈA
ÈÈA = ÇA
ÇÈA = A

The last rule works because these are lossless, one-to-one functions, which means they
have inverses which are also functions.

A word about notation: [ and È are both rotate down/left. The [
was chosen as an ASCIIized È because:

012 A
120 [A

As you can see, the digits are shifted left, just as [ closes on the left. Flip [ 90 degrees
and it resembles È. Rotate down is [, up is ].
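The rotate identities can be sanity-checked with lookup tables. In this sketch the two rotates are modeled as tuples indexed by trit value; which direction is "up" versus "down" is my assumption, but the composition identities (rotating the same way twice equals rotating the other way once, and opposite rotates cancel) hold either way:

```python
# Unary trit functions as lookup tables indexed by the input trit.
ROT_UP = (1, 2, 0)    # Ç (]) : 0->1, 1->2, 2->0
ROT_DOWN = (2, 0, 1)  # È ([) : 0->2, 1->0, 2->1

def compose(f, g):
    """Table of f applied after g: (f.g)(t) = f(g(t))."""
    return tuple(f[g[t]] for t in range(3))

assert compose(ROT_UP, ROT_UP) == ROT_DOWN     # rotating up twice = down once
assert compose(ROT_DOWN, ROT_DOWN) == ROT_UP   # and vice versa
assert compose(ROT_UP, ROT_DOWN) == (0, 1, 2)  # opposite rotates cancel
```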

The most plentiful trinary unary functions have only two distinct values
in their function table; one value is repeated. These have the effect of replacing
the trit with a specified value if it matches another specified value, else
defaulting to the third specified value. The shift functions are useful for coercing trits
into bits when working with Trinary Coded Binary, but with the help of the swap, rotate,
and invert operators they can create any of the partial unique-loss functions.

The first trit of the if-then-else (ITE) field is the value to compare the input
with; if the input is this value, the second trit of the ITE field is output,
else the third is. For example, function 122 has an ITE of
012, so the process goes like this: a) if the input is 0, the output
is 1; b) else, the output is 2. ITE reverse-lookup:
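The ITE encoding can be sketched as a small lookup helper (the `ite` name is mine; the field order - compare value, then-output, else-output - follows the description above):

```python
def ite(field, trit):
    """Interpret a 3-trit ITE field: if `trit` equals field[0],
    return field[1], else return field[2]."""
    cmp, then, other = field
    return then if trit == cmp else other

# ITE 012: "if the input is 0, output 1, else output 2"
# yields the unary function table 122.
table = tuple(ite((0, 1, 2), t) for t in range(3))
assert table == (1, 2, 2)
```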

This document uses both Symbol/Wingding fonts, which show how the trinary
operators are supposed to look, and ASCII approximations, which work on all
modern platforms. Neither would suffice on a real trinary computer, as the
ASCII characters may be needed for other purposes. Unicode defines all
the necessary unary operator characters:

It's no problem to enumerate all the unary functions, because there are
only 3^3 = 27 of them. Not so with trinary binary functions. There are
3^(3^2) = 3^9 = 19,683 possible trinary binary functions. We can't possibly
list them all; most of them wouldn't even be useful, or could be derived from
more basic building-block functions.

One interesting thing about trinary binary functions is that you can tell whether
they are commutative (order of inputs doesn't matter, A ? B = B ? A) from their
function table. Here's how it works: take the function's trits, say they
are abc,def,ghi. Write them in a 3x3 table, as rows:

a b c
d e f
g h i

Now take the columns: adg,beh,cfi. If
adg,beh,cfi =
abc,def,ghi, then the function is commutative. In order for this to occur:

b = d
c = g
f = h

(Note: the method of writing the function in rows and then taking the columns
works for binary too, if you use a 2x2 table. I suppose it works for
higher bases as well.) Another method of determining commutativity is to
swap the inputs, leave the outputs attached, and check whether the results are
equal. In fact, the matrix method is a shortcut for this swap test.
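The matrix test amounts to checking that the 3x3 function table is symmetric. A minimal sketch (helper names are my own):

```python
def is_commutative(table):
    """`table` is a 3x3 nested tuple: table[a][b] = output for inputs a, b.
    The function is commutative iff the matrix equals its transpose,
    i.e. b=d, c=g, f=h in the abc,def,ghi layout."""
    return all(table[a][b] == table[b][a]
               for a in range(3) for b in range(3))

# Tritwise maximum is commutative...
MAX = tuple(tuple(max(a, b) for b in range(3)) for a in range(3))
assert is_commutative(MAX)

# ...but "return the first operand" is not.
FIRST = tuple(tuple(a for b in range(3)) for a in range(3))
assert not is_commutative(FIRST)
```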

If a function isn't commutative, it's probably not worth our time. So we
can cut all the possible functions down to a mere 729 commutative functions,
less than 4% of the total. To do this, we remove the inputs which
are the same as other inputs, but in a different order:

9 trits down to 6. 3^9 = 19,683, 3^6 = 729. 729/19,683 = 1/27 = 3.7037...% of all
possible binary trinary functions. It should be obvious whether we are
talking about an abcdefghi or an abcefi truth table from its length.

To convert a 6-trit truth table to 9-trit, duplicate the symmetric entries:
abcefi becomes abc,bef,cfi.
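Assuming the symmetry constraints b=d, c=g, f=h from the commutativity discussion above, the 6-trit compact form expands to the full table like this (a sketch; `expand` is my own name):

```python
def expand(abcefi):
    """Expand a 6-trit commutative truth table (a,b,c,e,f,i) to the
    full 9-trit table (a,b,c, d,e,f, g,h,i), filling d=b, g=c, h=f."""
    a, b, c, e, f, i = abcefi
    return (a, b, c, b, e, f, c, f, i)

# Tritwise minimum: compact form 000112 expands to 000,011,012.
assert expand((0, 0, 0, 1, 1, 2)) == (0, 0, 0, 0, 1, 1, 0, 1, 2)
```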

5.5.2.1 UNARY LOGICAL OPERATORS
Let's start with AND and OR. To begin with, these can be considered
"choice" or "preference" operators, as they always return one of their
operands. AND can be described as wanting to return 0, but returning 1
if it is given no other choice, i.e., if both operands are 1. Similarly,
OR wants to return 1 but returns 0 if that is its only choice. From
this it is immediately apparent that each operator has an identity
element that "always loses", and a dominator element that "always wins".
AND and OR are commutative and associative, and each distributes
over the other. They are also symmetric with each other, in the sense
that AND looks like OR and OR looks like AND when the roles of 0 and 1
are interchanged (De Morgan's Laws). This symmetry property seems to be
a key element to the idea that these are logical, rather than arithmetic,
operators. In a three-valued logic we would similarly expect a three-
way symmetry among the three values 0, 1 and 2 and the three operators
AND, OR and (of course) BUT.
The following tritwise operations have all the desired properties:
OR returns the greater of its two operands. That is, it returns 2 if
it can get it, else it tries to return 1, and it returns 0 only if both
operands are 0. AND wants to return 0, will return 2 if it can't get
0, and returns 1 only if forced. BUT wants 1, will take 0, and tries
to avoid 2. The equivalents to De Morgan's Laws apply to rotations
of the three elements, e.g., 0 -> 1, 1 -> 2, 2 -> 0. Each operator
distributes over exactly one other operator, so the property
"X distributes over Y" is not transitive. The question of which way
this distributivity ring goes around is left as an exercise for the
student.
In TriINTERCAL programs the '@' (whirlpool) symbol denotes the unary
tritwise BUT operation. You can think of the whirlpool as drawing
values preferentially towards the central value 1. Alternatively,
you can think of it as drawing your soul and your sanity inexorably
down...
On the other hand, maybe it's best you NOT think of it that way.
A few comments about how these operators can be used. OR acts like
a tritwise maximum operation. AND can be used with tritmasks. 0's
in a mask wipe out the corresponding elements in the other operand,
while 1's let the corresponding elements pass through unchanged. 2's
in a mask consolidate the values of nonzero elements, as both 1's and
2's in the other operand yield 2's in the output. BUT can be used to
create "partial tritmasks". 0's in a mask let BUT eliminate 2's from
the other operand while leaving other values unchanged. Of course,
the symmetry property guarantees that the operators don't really
behave differently from each other in any fundamental way; the apparent
differences come from the intuitive view that a 0 trit is "not set"
while a 1 or 2 trit is "set".
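The manual's three operators can be modeled directly as preference functions: each returns whichever operand ranks higher in its preference order. The `PREF` table and `op` helper below are my own names, a sketch of the behavior described above:

```python
# Preference orders per the TriINTERCAL manual: first choice first.
PREF = {'AND': (0, 2, 1),  # wants 0, takes 2, returns 1 only if forced
        'OR':  (2, 1, 0),  # tritwise maximum
        'BUT': (1, 0, 2)}  # wants 1, will take 0, avoids 2

def op(name, a, b):
    """Return whichever operand is preferred under the operator's order."""
    order = PREF[name]
    return min(a, b, key=order.index)

assert op('OR', 1, 2) == 2                       # OR is tritwise max
assert op('AND', 1, 2) == 2                      # can't get 0, so takes 2
assert op('BUT', 0, 2) == 0                      # a 0 mask eliminates 2's
assert all(op('BUT', 1, b) == 1 for b in range(3))
```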

Notice how A ? A = A, where ? is a preference/choice function: the
operator has no choice but to return A. Also, since preference
functions are commutative, 01 and 10, 02 and 20, and 12 and 21 are equal,
respectively. So the only outputs worth looking at are the bold ones.
The bold outputs, 01, 02, and 12, form the zoztot number (zero-one,
zero-two, one-two). Zoztots contain one of the middle preference, two of the
first, and zero of the third. If a is the middle preference trit, then:

pref zoztot
0ab = 00a
1ab = 1a1
2ab = a22

pref-012 is the minimum function while pref-210 is the maximum; this is
quite obvious, as they prefer the lowest or highest trit respectively. By
extrapolating, all of the preference functions can be found:
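All six preference functions and their zoztots can be enumerated mechanically (a sketch; `pref_fn` is my own name):

```python
from itertools import permutations

def pref_fn(order):
    """Preference function: return the operand ranked higher in `order`."""
    return lambda a, b: min(a, b, key=order.index)

# zoztot: outputs for the unordered pairs (0,1), (0,2), (1,2).
for order in permutations(range(3)):
    f = pref_fn(order)
    zoztot = (f(0, 1), f(0, 2), f(1, 2))
    print(''.join(map(str, order)), '->', ''.join(map(str, zoztot)))

assert pref_fn((0, 1, 2))(1, 2) == min(1, 2)   # pref-012 is minimum
assert pref_fn((2, 1, 0))(1, 2) == max(1, 2)   # pref-210 is maximum
```

For pref-012 this prints zoztot 001, matching the 0ab = 00a rule above with a = 1.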

Unary gates exist within binary gates, no matter the base. Binary trinary gates have
unary trinary gates within them, just as
binary Boolean gates are composed of unary Boolean gates.

If we write a binary trinary function's truth table in three groups of three,
each group will be a unary function. Group 0 is the unary function which operates
on B if A is 0, group 1 is the unary function which operates on B if A is 1, etc.

0 1 2
abc,def,ghi = unary operations on B if A=0,1,2

Knowing this, we can decompose unary functions to see how they operate when used
as tritmasks.

Minimum sets the output to 0 if the mask is 0, lets the input pass through if the mask
is 2, and if the mask is 1 then 0 and 1 pass through but 2 is changed to 1. Similarly,
maximum lets the input pass through if the mask is 0, sets the output to 2 if the mask
is 2, and if the mask is 1 lets 1 and 2 pass through but raises 0 to 1.

A trit exclusive-max'd with 1, and then exclusive-max'd with 1 again, gives the original value,
just as (XOR A,1) XOR 1 = A. This works because the unary function invoked when one of the
inputs is 1 is its own inverse. Since it is called on its own output,
(XMAX B,1) XMAX 1 = B.

BUT, as explained in the TriINTERCAL manual, eliminates 2's while leaving other values unchanged
when the mask contains a 0: the unary function 010=\[B maps 2 to 0. 1's in the
mask always output 1, while 2's in the mask output the other operand.

"Ahhh, what an awful dream. Ones and zeroes everywhere... [shudder] and I thought I saw a two." -- Bender
"It was just a dream, Bender. There's no such thing as two." -- Fry
(Futurama)

What Fry says rings true in balanced trinary.

"There can only be one"

Positive and negative one, that is. Lack of being is zero. That is, balanced trinary
uses the digits {-1,0,+1} rather than {0,1,2}. To map unbalanced trinary to balanced,
subtract one from each digit. Because the prefix negative sign makes -1 longer than 0 or 1,
it's often written with a vinculum or overscore above the numeral:

_
1 0 1

Unbalanced and balanced conversion chart:

Unbalanced Balanced
0 -1
1 0
2 +1

Since HTML doesn't have overstrike, we'll use 1 for -1 instead. Optimally,
the Unicode character U+0305 COMBINING OVERLINE could be used.
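Converting an integer to balanced ternary can be sketched with the usual carry trick: a remainder of 2 becomes -1 with a carry into the next trit. The helper names, and using 'T' as an ASCII stand-in for the overstruck 1, are my own choices:

```python
def to_balanced(n):
    """Convert an integer to balanced ternary digits in {-1, 0, 1},
    most significant digit first."""
    if n == 0:
        return [0]
    digits = []
    while n:
        r = n % 3
        if r == 2:      # digit 2 becomes -1, carrying 1 into the next trit
            r = -1
            n += 1
        digits.append(r)
        n //= 3
    return digits[::-1]

def show(digits):
    """Render -1 as 'T', a stand-in for the overlined 1."""
    return ' '.join('T' if d == -1 else str(d) for d in digits)

assert to_balanced(2) == [1, -1]      # 1T: 3 - 1 = 2
assert to_balanced(8) == [1, 0, -1]   # 10T: 9 - 1 = 8
print(show(to_balanced(8)))
```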

Note that though TriINTERCAL considers all numbers to be unsigned,
nothing prevents the programmer from implementing arithmetic operations
that treat their operands as signed. Three's complement is one obvious
choice, but balanced ternary notation is also a possibility. This
latter is a very pretty and symmetrical system in which all 2 trits
are treated as if they had the value -1.

The table above was filled in using the definition ? = TF. For example, F and ?
= (F and T)(F and F) = FF = F. Substitute T and F for ? in two copies of the expression
and combine the results. Because ? is more an extension to Boolean algebra than a whole
new system, binary functions are still represented by four bits.

The toughest part of building a trinary computer system is the actual
implementation. Note that this document uses {0,1,2} for trits when discussing
theory, but when implementation is discussed {-1,0,+1} shall be used instead,
corresponding to actual voltage levels. Keep in mind -1 maps to 0, 0 to 1,
and +1 to 2. Simply using 0->0, 1->1,
and 2->-1 will not work. Subtract or add one when converting between
unbalanced and balanced to keep everything in balance.

Magnetism naturally has North, South, and unmagnetized states.
Materials respond differently
depending on whether they're diamagnetic, paramagnetic, or ferromagnetic.
Diamagnetism is a phenomenon all materials inherently experience, but it's
very weak. Diamagnetic materials repel both North and South magnetic flux.
Ferromagnetism
occurs when magnetic domains align, forming a temporary magnet. The magnetization
is greater than the applied magnetic field.
Paramagnetic materials have magnetization proportional to the strength of
the magnetic field applied to them.

A relay's coil is normally wound around a ferromagnetic core, which
increases its strength. The contacts themselves, however, are for the most
part paramagnetic. This means the COM contact is attracted to the NO contact
if there is any magnetic flux radiating from the coil, no matter the direction.

I have successfully built the 2D2R configuration physically. I
created a tritwise inverter, and it worked great. However, the relays required large voltages
and were generally unpleasant to deal with.

2D2Q, that is Dual Diode/Dual Transistor, is a similar configuration but the relays are
replaced with transistors. It has not yet been tested.

The COM contact is normally connected to Neutral, but a positive voltage causes
it to be connected to North, while negative connects it to South. In this way,
the 0, 1, and 2 trits can be detected and substituted with arbitrary values.
All 27 unary functions can be created using a single bipolar relay.

A Q
0 Neutral
1 North
2 South

Bipolar relays can also be used as 1-trit demultiplexers. The input is still the coil,
but the trit on COM redirects to South/Neutral/North depending on the coil. In this way,
several unary gates can be created having an input we'll call A, and demultiplexed by
an input called B -- thus creating a binary trinary logic gate.

RSFQ Logic came up in a thread
on Slashdot. Liquor made a post which I'll quote in full:

Unfortunately, RSFQ (Rapid Single Flux Quantum)
[rochester.edu] circuitry is beyond the scope of SPICE simulations, but this
appears to me to be a natural fit for the trinary logic paradigm.

Some circuits have already been physically built and tested - and at least
one person feels that they lend themselves to
tristate logic gates [sunysb.edu].

The basic principles are already in the category of proven technology -
ever heard of a SQUID sensor?

Josephson junctions work equally well for either positive or negative
currents - and so do magnetic flux quanta. (But this circuitry has to be
the ultimate in low-power computing - you can't get much lower discrete
amounts of energy than a single quantum of magnetic flux.)

A full-wave bridge rectifier has four diodes, all cathode-up, here denoted by a ^.
FWBRs are often used in power supplies to flip the negative half of the waveform;
AC is given on ~, positive voltage is on +, negative on -. In trinary computers,
a FWBR makes a binary trinary gate because there are two inputs.

A great site on trinary logic and implementation by Steve Grubb. I learned
about most of the gates there. The tutorials explain how to use trinary
logic to create useful circuits, and schematics for building trinary gates
are included. Slashdot | Ternary Computing Revisited, submitted by yours truly,
links to this article.