1.3 Conversion of Data Types

Implicit Conversion

Conversion between data types is automatic and implicit. Values
deposited in sets are always converted to strings and those in tables
to the data type of the table. Values deposited in rule variables are
left as character strings until they are used as something else; thus
?MYVAR = 1.5 sets ?MYVAR to the character string “1.5”, not the
number.

Performing arithmetic on items causes them to be converted to
floating point. Integer arithmetic does not exist, even for adding 0
or 1 to an integer value or for adding the contents of two integer
table entries together. But Aspen SCM commands such as COLSUM and table
operations do use integer arithmetic when working on integer tables.
Large integers (greater than 10 million) are therefore best
manipulated by using integer tables; they can also be manipulated as
character strings without losing precision, but obviously this does
not support arithmetic.
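The 7-significant-digit limit behind this advice can be illustrated by
emulating single-precision arithmetic. The sketch below is Python, not
the Aspen SCM rule language, and assumes the floating point format is
IEEE binary32, in which the first integer that cannot be stored
exactly is 2^24 = 16,777,216:

```python
import struct

def f32(x: float) -> float:
    """Round a Python double to the nearest IEEE single-precision value."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

# Below 2**24 every integer is exactly representable in single precision...
assert f32(10_000_001.0) == 10_000_001.0

# ...but above it neighbouring floats are at least 2 apart, so adding
# 1 to an integer value can have no effect at all.
big = f32(16_777_216.0)        # 2**24, exactly representable
assert f32(big + 1.0) == big   # the increment is rounded away
```

An integer table, by contrast, would hold 16777217 exactly, which is
why values above roughly 10 million are best kept in integer tables.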

If a variable is used as an internal index (pointer) to a member
of a set, it is converted to an integer.

Forced Conversion

To force the data type of a variable
explicitly, do as follows:

force to integer: use the INT function.
This rounds values that are within 0.000001 of an integer and
otherwise truncates towards zero, hence

INT 2.1 = 2
INT 1.999999 = 2
INT -2.1 = -2
INT -1.999999 = -2
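One way to check this rounding rule against the four results above is
to emulate it. This Python sketch is an assumption about the behaviour
described, not Aspen SCM code: it rounds when the value is within
0.000001 of an integer and otherwise truncates towards zero.

```python
import math

TOLERANCE = 0.000001  # threshold quoted in the text

def int_fn(x: float) -> int:
    """Emulate the described INT: round if within 0.000001 of an
    integer, otherwise truncate towards zero."""
    nearest = round(x)
    if abs(x - nearest) <= TOLERANCE:
        return int(nearest)
    return math.trunc(x)  # truncation towards zero

print(int_fn(2.1), int_fn(1.999999), int_fn(-2.1), int_fn(-1.999999))
# → 2 2 -2 -2
```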

force to floating point: add zero:

?NUM = ?STRNUM + 0

force to a string: single quote it:

?STRNUM = '?STRNUM'

Loss of Precision on Conversion

Converting variables between data types can cause loss of
precision and may make it impossible to convert the value back to
what it was. This is most obvious for the conversion of small numbers
to character strings: Aspen SCM Expert System does not convert values
to exponential notation, so if you put the result of the calculation
1.234567 / 1000000 into a string variable the result is .000001. If
you then multiply it back by 1000000, the result is 1 rather than
1.234567.
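The round trip can be mimicked in Python, assuming (the manual does
not state this) that the string conversion uses fixed-point notation
with about 6 decimal places:

```python
# Small result of a division...
x = 1.234567 / 1000000  # 0.000001234567

# ...converted to a string in fixed-point notation (no exponential
# form), here assumed to keep 6 decimal places:
s = f"{x:.6f}"          # "0.000001" -- most of the digits are gone

# Converting back and scaling up cannot recover the lost digits:
result = float(s) * 1000000
print(s, result)        # → 0.000001 1.0
```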

Conversion of large integers (greater than
10 million) to floating point also causes loss of precision:
performing arithmetic on them converts them to floating point
numbers, which carry only about 7 significant digits. Thus

?NUM = 123456789

causes ?NUM to be set to the character
string 123456789;

?NUM = INT 123456789

causes ?NUM to be set to the integer value
123456789; but

?NUM = 123456789 + 0

causes ?NUM to be set to the floating point number 123456792, which
is the nearest floating point number to 123456789. The next floating
point number below this is 123456784, i.e. 5 away rather than 3,
which is why ?NUM was rounded up rather than down.
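The two neighbouring values quoted above can be checked by
round-tripping through single precision. This Python sketch assumes
the format is IEEE binary32, in which numbers between 2^26 and 2^27
are spaced 8 apart:

```python
import struct

def to_f32(x: float) -> float:
    """Round to the nearest IEEE single-precision value."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

stored = to_f32(123456789.0)
print(stored)   # → 123456792.0 (3 above the true value)

# The next representable single-precision number below it:
bits = struct.unpack('<I', struct.pack('<f', stored))[0]
below = struct.unpack('<f', struct.pack('<I', bits - 1))[0]
print(below)    # → 123456784.0 (5 below the true value)
```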