The asterisks "***" show which items have appeared on the Sybase website since the previous version of this page.

Only the latest fully-supported versions of SQL Anywhere (11.0.1, 12.0.1, 16.0 and On Demand) are shown here.

Numbers like "EBF 21788 SP60" are the new SAP-specific codes that go with build numbers like "12.0.1.3894".

Just because an older version or different platform isn't "fully supported" any more doesn't mean you can't download files (or ask questions, or get help); it just means there won't be any more new updates released.


Wednesday, December 18, 2013

LastPlanText Returns the long text plan of the last query executed on the connection. You control the remembering of the last plan by setting the RememberLastPlan option of the sa_server_option system procedure, or using the -zp server option. See -zp database server option.

LastStatement Returns the most recently prepared SQL statement for the current connection.

The LastStatement value is set when a statement is prepared, and is cleared when a statement is dropped. Only one statement string is remembered for each connection.

If sa_conn_activity reports a non-empty value for a connection, it is most likely the statement that the connection is currently executing. If the statement had completed, it would likely have been dropped and the property value would have been cleared. If an application prepares multiple statements and retains their statement handles, then the LastStatement value does not reflect what a connection is currently doing.

When client statement caching is enabled, and a cached statement is reused, this property returns an empty string.
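For reference, here's roughly what it looks like to inspect both properties yourself; a minimal sketch using the documented CONNECTION_PROPERTY() function and sa_server_option() procedure (the query being measured is just an example):

```sql
-- Turn on plan remembering for all connections (server-wide option).
CALL sa_server_option ( 'RememberLastPlan', 'ON' );

-- Run a query, then look at the two properties for this connection.
SELECT COUNT(*) FROM SYSTAB;

SELECT CONNECTION_PROPERTY ( 'LastStatement' ) AS last_statement,
       CONNECTION_PROPERTY ( 'LastPlanText' )  AS last_plan_text;
```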

It's easy to assume both properties apply to the same SQL statement, and they often do... but they don't have to. Here's an example using the code from this article:

This Foxhound snapshot shows SELECT * FROM p() as the Last Statement value, but the Last Plan Text applies to a different statement, the SELECT COUNT_BIG(*) statement inside the stored procedure:

That's the difference between these two connection properties:

LastStatement shows what came from the client application,

whereas LastPlanText applies to the last query run by the server whether it came from the client or (in this case) came from within a stored procedure.

Are you SURE about that?

Question: Is it possible the behavior of LastPlanText and LastStatement in the example above is related to the peculiar symptoms described in Beware The Slow Result Set Query and Hunting The Slow Result Set Query, where the SELECT statement used to return a result set didn't show up at all in the procedure profiler and was fairly well hidden in the database tracing output?

Answer: Let's run a test with a stored procedure that contains time-consuming queries but doesn't return a result set at all:
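The test procedure isn't reproduced in this excerpt, but based on the description it might look something like this hedged sketch (the table used to burn time, and the procedure name, are illustrative, not the article's actual code):

```sql
CREATE PROCEDURE p_no_result_set()
BEGIN
   DECLARE row_count BIGINT;
   WAITFOR DELAY '00:00:01';
   -- Time-consuming query, but the result goes INTO a variable,
   -- so no result set is returned to the caller.
   SELECT COUNT_BIG(*) INTO row_count
     FROM SYSTAB AS a CROSS JOIN SYSTAB AS b CROSS JOIN SYSTAB AS c;
END;
```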


Friday, December 13, 2013

An earlier article Beware The Slow Result Set Query described how SQL Anywhere's stored procedure profiler doesn't highlight, or even report, the time spent executing SELECT statements that return result sets from stored procedures.

SQL Anywhere's Database Tracing feature does a bit better than the procedure profiler, but it can be somewhat confusing. Here's a demonstration using the same procedure as before:
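Database Tracing can be started from Sybase Central, or directly in SQL; a minimal sketch (LOCAL DATABASE is one of several possible tracing targets):

```sql
ATTACH TRACING TO LOCAL DATABASE;  -- start capturing tracing data

SELECT * FROM p();                 -- the workload being traced

DETACH TRACING WITH SAVE;          -- stop, and keep the data for analysis
```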

Let's pretend!

You and I both know where the time is going in this trivial procedure, but let's pretend that's not the case... let's pretend this code is part of a giant application and we don't know where the time is going.

Unlike the procedure profiler, however, the Database Tracing feature captures graphical plans for all the time-consuming queries. Here's the plan for SELECT * FROM p(); it shows that the procedure call is taking all the time (Runtime 179.08 seconds):

Clearly, it can't be the procedure call itself that's taking the time; it must be something the procedure is doing... and it's only doing two things, a WAITFOR statement and a SELECT COUNT_BIG(*).

The Database Tracing Summary list doesn't show the WAITFOR statement at all, even though the procedure profiler previously reported that it took over 1,000 milliseconds. But never mind: it doesn't really matter if Database Tracing doesn't capture WAITFOR delays. What about the SELECT COUNT_BIG(*)?

Yes, there it is, line 12 in the Summary list shows that the SELECT COUNT_BIG(*) was executed twice (huh? oh, never mind) for a total time of 74 milliseconds.

Now, if this was a real example, and line 12 was one of many hundreds or thousands of queries in the Database Tracing Summary list, it might go unnoticed... 74 milliseconds is nothing.

It's not nothing!

It turns out that "74 milliseconds" is a complete fiction; the graphical plan for the SELECT COUNT_BIG(*) tells a whole different story:

This plan tells the real story: the Node Statistics FirstRowRunTime is 177.99 seconds, and that's backed up by the Subtree Statistics Runtime of 177.99 seconds.

Nobody runs Database Tracing on trivial applications

Here's the point of this exercise: When you're having trouble finding out exactly where the time is going in a big application, look for time-consuming stored procedure result set queries. Look at the code, and drill down into the plans captured by Database Tracing, or go Old School with the GRAPHICAL_PLAN() function.


Monday, December 9, 2013

When using the procedure profiler in SQL Anywhere, it is easy to miss long-running queries that build the result sets returned by stored procedure calls. The code for these queries is located inside the stored procedure, but the procedure profiler reports the time spent executing them as zero.
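The procedure p() itself isn't shown in this excerpt; a hedged reconstruction, based on the statements discussed below (the table used to burn time is illustrative), might look like this:

```sql
CREATE PROCEDURE p()
RESULT ( row_count BIGINT )
BEGIN
   WAITFOR DELAY '00:00:01';
   -- This SELECT builds the result set returned by the procedure.
   SELECT COUNT_BIG(*)
     FROM SYSTAB AS a CROSS JOIN SYSTAB AS b CROSS JOIN SYSTAB AS c;
END;

SELECT * FROM p();
```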

ISQL says the SELECT * FROM p() ran for 14.39 seconds; here's what the procedure profiler says about the call to p():

14.39 seconds versus 1165 milliseconds? How can that be?

Where did the other 13.2 seconds go?

Not anywhere inside p(), at least not according to the procedure profiler:

As expected, 1163 milliseconds are reported against the WAITFOR DELAY '00:00:01' statement, but only 2 milliseconds for the SELECT... it might as well say "zero".

Maybe it's ISQL's fault

The procedure profiler doesn't look inside client applications, and that's what ISQL is: a client application running the SELECT * FROM p() statement. So let's push that SELECT down inside another stored procedure p2() and call that one from ISQL:
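A sketch of what that wrapper might look like (the p2() name comes from the article; the body is a hedged guess):

```sql
CREATE PROCEDURE p2()
RESULT ( row_count BIGINT )
BEGIN
   -- Just pass through the result set from p().
   SELECT * FROM p();
END;

SELECT * FROM p2();
```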

Friday, December 6, 2013

Fans of Pawn Stars will recognize two of the main cast members in this Microsoft commercial . . . this is the full version of the commercial, much better than the 30-second clips currently showing on TV:


Monday, December 2, 2013

Question: How can I tell if a column is a good candidate for a CLUSTERED index?

Glib Answer: Create a clustered index on that column and see if query performance is improved.

Longer Answer: If the rows are inserted in more-or-less the same order as values in the column, and the column appears in predicates that might benefit from a clustered index on that column (such as a BETWEEN range query), then see the Glib Answer above.

Seriously... :)

In SQL Anywhere the CLUSTERED attribute is more of a performance hint than a constraint. In other words, if a clustered index already exists then SQL Anywhere will try to insert new rows in the same physical order as values in the index, but there's no guarantee. And there's REALLY no guarantee if a non-clustered index already exists and you change it with ALTER INDEX:

ALTER INDEX x ON t CLUSTERED;

In fact, that ALTER INDEX will run instantly; SQL Anywhere will not move anything around, and it's up to you to tell the truth about insert ordering if you want good performance.

The flip side is that it's really easy to run ALTER INDEX x ON t CLUSTERED statements, even in production, and see if performance improves; you can always change it back with this:

ALTER INDEX x ON t NONCLUSTERED;

In other words, the Glib Answer ain't so glib after all.

Answer the question!

Sometimes a column is "somewhat clustered", and you'd like to know "how clustered". If the rows are perfectly ordered according to the values in some other column, you can use that fact to check the candidate column; e.g., a PRIMARY KEY DEFAULT AUTOINCREMENT column will perfectly define the row order if the DEFAULT is always used when inserting.

Here's some code showing how to compare the ordering of a candidate column with a DEFAULT AUTOINCREMENT column:
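The listing isn't reproduced in this excerpt; the paragraphs below refer to line numbers in the original listing, not to this hedged reconstruction (table and column names t, pkey and candidate_column follow the surrounding text; the layout is a guess):

```sql
WITH ordered_t AS
   ( SELECT ROW_NUMBER() OVER ( ORDER BY t.pkey ) AS row_order,
            t.candidate_column
       FROM t )
SELECT 'out of order' AS measurement,
       COUNT(*)       AS row_count
  FROM ( SELECT * FROM ordered_t ) AS previous_row
          INNER JOIN ( SELECT * FROM ordered_t ) AS next_row
          ON next_row.row_order = previous_row.row_order + 1
 WHERE next_row.candidate_column < previous_row.candidate_column
UNION ALL
SELECT 'total rows',
       COUNT(*)
  FROM t;
```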

The WITH clause on lines 9 through 11 creates a local view called ordered_t. That view deals with the fact that there may be gaps between adjacent values of t.pkey. It does that by calling ROW_NUMBER() to replace t.pkey with the sequence 1, 2, 3 with no gaps.

The FROM clause on lines 14 through 17 joins two copies of the ( SELECT * FROM ordered_t ) view, matching each row in one copy with the next (adjacent) row in the other copy.

The WHERE clause on line 18 looks for candidate_column values that are out of order in those adjacent rows.

The SELECT on lines 12 through 18 displays how many rows are out of order, and the UNION ALL SELECT on lines 19 through 22 shows the total row count.

The result shows that quite a few rows are out of order (1 out of 7), so maybe a CLUSTERED index won't help.

Let's try one

Here's a range query that was tested with no index on candidate_column, with a non-clustered index, and with a clustered index. Each test was run twice, with and without capturing a graphical plan, and the database was restarted before each test to eliminate effects of the cache:

SELECT COUNT(*)
FROM t
WHERE candidate_column BETWEEN CURRENT DATE - 1000
AND CURRENT DATE - 900;

The second column shows the FirstRowRunTime in seconds reported in the graphical plans, and the third column shows what ISQL reported when run without capturing plans.
The two columns are consistent, showing that

the non-clustered index shaved almost 50% off the no-index execution time, and

declaring the index as CLUSTERED reduced the time by 90%.

So... in this case, for the distribution of data in this table, for this particular query... CLUSTERED wins!

Here's a snippet from the graphical plan for the no-index test, showing that an intra-query Parallel Table Scan was used:

The plan for the non-clustered index test shows no more table scan, now it's using a Parallel Index Only Retrieval Scan; remember, it's just counting rows so it doesn't actually have to retrieve anything from the table itself:

The clustered index plan looks exactly the same as the non-clustered index plan, except it says "Clustered index yes"... and it's faster.


Wednesday, November 27, 2013

Here's a snippet from a very curious graphical plan for a large, slow 4-way UNION in a SQL Anywhere 16 stored procedure:

The selected DT (Derived Table) node represents the 3rd SELECT statement in the UNION, and the Subtree Statistics - PercentTotalCost of 66.979% shows that this SELECT is the slowest of the four.

So, why is it curious?

It's curious because the node immediately below the DT, the JNL Nested Loop Join node, shows a Node Statistics - PercentTotalCost of zero and a Subtree Statistics - PercentTotalCost of only 8%... where did the rest of the 66.979% go?

Did it disappear somewhere between the DT and the JNL?

In an effort to find where the time was going, the third SELECT was moved out of the UNION and executed separately. Here's an even-more-curious snippet from the graphical plan for that SELECT:

The DT node has now become the SELECT node, with 100% for the Subtree Statistics - PercentTotalCost.

However, the Work node immediately underneath the SELECT node shows zero for both the Node and Subtree Statistics - PercentTotalCost.

Now we're not just looking for most of the time, we're looking for ALL the time . . .

Where did it go?

It turns out there's more to this query than one giant SELECT FROM... there are a bunch of subqueries in the SELECT list, and one of them is consuming . . . wait for it . . . 98.88% of the time.

To see the subqueries in the SQL Anywhere Plan Viewer, you have to click on the dropdown list that says "Main Query" and look at the "SubQ" entries, in this case SubQ 4:

Back in the "Main Query" display, the SubQ 4 subquery appears as the third-last item in the SELECT list:

expr166[EXISTS(SubQ 4) : 50% Guess,1,0]

The subquery itself isn't so bad, it only took 0.19238% of the time (the Actual(Avg) column in the Node Statistics above)... it's the 514 separate executions, one for each row in the outer SELECT, that brought the total to 98.88% (the Actual(Total) column).

The solution wasn't so hard, either... one more CREATE INDEX, and the execution time for the stored procedure call dropped from 42 seconds to 3.4 seconds.

But the solution's not the point here... the point is, just because a SELECT contains a dozen tables in the FROM clause doesn't mean that's where the time is going.

And just because the Plan Viewer doesn't shout "Hey, look at this slow subquery!" doesn't mean you shouldn't dig down and look for yourself... especially when the Main Query display doesn't account for all the time being spent.

However, knowing the "what" and "where" of query bottlenecks may not be enough, you also need to know the "why". For that, you need to see the graphical plan with statistics for each slow query... and you often can't simply copy and paste the queries into ISQL to see the plans because you won't get realistic results. Instead, capture the plan from inside the stored procedure itself:

Insert the code template immediately ahead of the slow query in the stored procedure.

Copy and paste the slow query where it says [PASTE SQL STATEMENT HERE].

Change all embedded single quotes ' to be quote pairs '' (not double quotes, but two single quotes).

References to local variables are OK; so are trailing --comments, embedded /*comments*/ and the closing semicolon.

Make sure you preserve the correct functionality of the stored procedure. In particular, make sure it's OK to execute the query twice (once by the GRAPHICAL_PLAN() function, and once by the original SQL statement). In this case, the "INTO #temp" must be removed from the query passed to GRAPHICAL_PLAN() for two reasons; first, so multiple rows aren't inserted to the temporary table by two query executions, and second, because GRAPHICAL_PLAN() fails with SQLCODE=-894 for a query that contains an INTO #temp clause.

You may choose to pass only portions of a slow query to GRAPHICAL_PLAN(); for example, the SELECT portion of a DELETE SELECT, or one of many SELECT statements in a large UNION.

You may also choose to code multiple CALL xp_write_file statements, with different *.saplan file names, for multiple slow queries or separate portions of one slow query.

Here's what the slow() procedure looks like after the code's been added:
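The finished procedure isn't reproduced in this excerpt, but the steps above boil down to something like this hedged sketch (the *.saplan file name and the placeholder query are illustrative; GRAPHICAL_PLAN() and xp_write_file() are documented SQL Anywhere calls, and statistics level 2 asks for detailed statistics):

```sql
CREATE PROCEDURE slow()
BEGIN
   -- Capture the plan with statistics, then run the original statement.
   CALL xp_write_file ( 'slow_query.saplan',
      GRAPHICAL_PLAN (
         'SELECT ...',   -- [PASTE SQL STATEMENT HERE], with quotes doubled
         2 ) );          -- 2 = detailed statistics

   SELECT ...;           -- the original slow query, unchanged
END;
```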

Wednesday, November 20, 2013

The description of the new READ_SERVER_FILE() procedure introduced in SQL Anywhere 16.0.0.1675 leads one to ask the question, "What makes READ_SERVER_FILE() different from the good old xp_read_file()?"

One obvious difference is that READ_SERVER_FILE() lets you specify a substring of the file while xp_read_file() doesn't.

Another difference is that xp_read_file() has a mysterious "lazy" parameter that was introduced in Version 12 of SQL Anywhere:

Enhancements to the xp_read_file system procedure
The xp_read_file system procedure now includes an optional parameter that allows you to specify lazy reads. When you specify this optional parameter and its value is not zero, the file is read and then immediately unlocked.
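Side by side, the two calls might look like this (the file name is illustrative; note that READ_SERVER_FILE() takes optional start and length arguments, while xp_read_file() takes the optional lazy flag):

```sql
-- New in 16.0.0.1675: read 1000 bytes starting at byte 1001.
SELECT READ_SERVER_FILE ( 'C:\\temp\\server.log', 1001, 1000 );

-- xp_read_file always reads the whole file;
-- the second argument 1 = lazy read (read, then immediately unlock).
SELECT xp_read_file ( 'C:\\temp\\server.log', 1 );
```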

Friday, November 8, 2013

Since the SQL Anywhere 16.0 "General Availability" build 1324 was released, there have been a number of EBFs that contained new features as well as fixes. Chances are, this trend will continue, and there won't be any interim "16.1" or "16.0.1" releases before Version 17 is released (which won't be happening any time soon).

That means if you want to know What's New in SQL Anywhere 16 since it first came out, you have to read the "read me" file for the latest EBF... or read this excerpt, where some of the more interesting entries have been highlighted...

MobiLink - Java Plugin for Sybase Central
================(Build #1453 - Engineering Case #733180)================
In the MobiLink plug-in, the popup menu for a synchronization model now contains
a new item, “Duplicate”. This item creates a copy of the synchronization
model in the same project. The name the user provides is used for the name
of the copy, as well as the script version and publication name values of
the new synchronization model. This feature is useful, when there is a working
synchronization system and a copy of it is required as a starting place for
making the next version of the system.

================(Build #1451 - Engineering Case #733174)================
When the test window in the MobiLink plug-in is opened, it first deploys
the synchronization model to the consolidated database and to a newly created
remote database. In the past, changes were made directly to the databases
to prepare for synchronization. This behavior is now changed so that SQL
files are generated containing the changes to be made. The SQL files are
then automatically applied to the databases. This is consistent with the
way deployment is handled when the deployment wizard is used.
This should result in no user visible change in behavior, but it will ensure
that going forward behavior seen when testing a synchronization model in
the test window is consistent with that seen when the model is actually deployed
using the deployment wizard.

MobiLink - Relay Server
================(Build #1453 - Engineering Case #732958)================
The existing affinity flag in the Relay Server Record has been extended to
carry a value of ‘x’ when the Relay Server told the client to expire the
affinity cookie. This can be useful for troubleshooting.

MobiLink - iAS Branded ODBC Drivers
================(Build #1484 - Engineering Case #735343)================
The MobiLink server now supports consolidated databases running on Sybase
IQ 16.0 servers. For the recommended ODBC drivers for Windows and Linux,
please visit the following link:
http://www.sybase.com/detail?id=1011880
The Row Level Versioning (RLV) feature introduced in Sybase IQ 16.0 has
removed the “single-writer” limitation. Therefore the IQ 16.0 server now
allows multiple connections modifying a RLV enabled table concurrently. Based
on testing, the upload would be ten times faster for synchronizations with
RLV enabled tables than with RLV disabled tables. Therefore, in order to
get better upload performance, all sync tables are recommended to be RLV
enabled. However, if there is any table that cannot be RLV enabled, for
instance a sync table that contains BLOBs and/or foreign keys, the upload
phase must be serialized. This requirement can be achieved, if the begin_upload
connection script is written to include or to use the following SQL statement:
LOCK TABLE table_name IN WRITE MODE WAIT time_string
where table_name is the name of a table that is defined on the IQ store
and the time_string gives the maximum time period to lock the table. The
table can be as simple as the one defined as:
create table coordinate_upload ( c1 int )
It is not required to have any data. If any of the other MobiLink server
transactions is required to modify any IQ tables, all of these transactions
must be serialized as well. The same logic mentioned above can be used.
This technique is considered more efficient than retries on each of the transactions
by the MobiLink server.

SQL Anywhere - OData Server
================(Build #1648 - Engineering Case #746461)================
The OData Producer now respects Content-Encoding and Accept-Encoding HTTP
request headers as specified by the HTTP 1.1 spec:
http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html
The Content-Encoding header is used by clients to indicate the encoding
of the request body. The Accept-Encoding header is used by clients to indicate
the preferred encoding of the response body.

SQL Anywhere - Other
================(Build #1691 - Engineering Case #749465)================
Previously, an Oracle JRE was shipped with the software for use by clients.
Now, the SAP JRE is shipped instead.
Upgrading overwrites the JRE directory (%SQLANY16%\binXX\jre170) and its
subdirectories. If you are using certificates, then your certificate store
(%SQLANY16%\binXX\jre170\lib\security\cacerts) is overwritten, including
your certificates. Similarly, fonts you added to the %SQLANY16%\binXX\jre170\lib\fonts\fallback
directory to help display characters in the administration tools may be lost.
To minimize upgrading steps with regards to the JRE change, create a backup
copy of the JRE directory and all of its subdirectories before you upgrade
so that you can refer to or restore files (such as cacerts) from the backup,
as needed. To restore settings, use the java_vm_options option (SQL Anywhere),
and/or the -sl java option (MobiLink) to optimize your Java VM startup settings.

================(Build #1670 - Engineering Case #749256)================
Strong encryption now achieved using OpenSSL
--------------------------------------------
Prior to this change, SQL Anywhere included a Certicom encryption module
that provided strong encryption used throughout the software. Now, SQL Anywhere
includes an OpenSSL encryption module for the strong encryption. The Certicom
encryption module has been removed.
Read the following descriptions to determine how you may be impacted by
this change.
FIPS encryption now requires the private key of an identity file to be encrypted
using AES
- OpenSSL FIPS supports AES encryption for the private key of an identity
file. New servers using the OpenSSL FIPS encryption module will not start
when using an identity file that has its private key encrypted with 3DES.
You must re-encrypt the identity file using AES. To do this, run a command
similar to the following using an upgraded viewcert utility:
viewcert -p -o new-file-name -op new-password -ip old-password old-file-name
The new and old passwords can be the same.
- The sample server identity file (rsaserver.id) and client identity file
(rsaclient.id) have been modified so that the private keys are encrypted
using AES rather than 3DES.
- Versions of the server that use the Certicom encryption module will not
start when using an identity file that has its private key encrypted using
AES. Trusted root certificate files specified using trusted_certificates
do not need to be modified.
Self-signed certificates must now have the Certificate Signing attribute
set
Self-signed certificates must now have the Certificate Signing attribute
set when using the identity encryption option (for example, the -x mlsrvXX
and -xs dbsrvXX options). To determine if a certificate has the Certificate
Signing attribute set, use the viewcert utility and look for the Certificate
Signing attribute in the Key Usage portion of the output. If your self-signed
certificates do not have the Certificate Signing attribute set, then you
must regenerate the certificates.
Create Certificate utility (createcert) now uses AES encryption instead
of 3DES
The Create Certificate utility (createcert) now uses AES rather than 3DES
encryption for encrypting the private key in the server identity file.
A new option, -3des, has been added to the Create Certificate utility. Use
this option when you want to create a 3DES-encrypted server identity file
that can be used by both new and old servers. Note that new servers running
in FIPS mode cannot start using 3DES-encrypted certificates; however, if
you are not running in FIPS mode, then you can use 3DES-encrypted certificates.
View Certificate utility (viewcert) now uses AES encryption instead of 3DES
The View Certificate utility (viewcert) now uses AES rather than 3DES encryption
when you specify the -p option to PEM-encode the output and when you specify
the -ip and -op options to set the password.
A new option, -3des, has been added to the View Certificate utility to allow
you encrypt output and passwords using 3DES instead of AES.
Database server now loads the FIPS driver file, dbfipsXX.dll, at startup
Previously, the 32-bit Windows database server loaded the FIPS driver file,
dbfipsXX.dll, only when needed. Now, the 32-bit Windows database server always
attempts to load dbfipsXX.dll at startup, and keeps it loaded for the life
of the server. If loading dbfipsXX.dll fails, then an error is returned only
when an attempt is made to use FIPS encryption.
Deploying FIPS
If you are deploying FIPS encryption, then there are new shared libraries
to deploy; these files are included in your software. The former files, sbgse2.dll
and libsbgse2.so, are no longer installed by the software. The new files
to deploy are:
- Windows 64-bit: libeay32.dll, ssleay32.dll, and msvcr100.dll
- Windows 32-bit: libeay32.dll, ssleay32.dll, and msvcr90.dll
- Linux: libcrypto.so and libssl.so
Note: On Windows, although 32-bit and 64-bit FIPS-certified OpenSSL libraries
for encryption are provided, you must use the 64-bit libraries on a 64-bit
system.
MobiLink-related changes and information
Connecting to a MobiLink server using client-side certificates now requires
the Digital Signature certificate attribute to be set
TLS/SSL connections to a MobiLink server using client-side certificates
now require the client-side certificate to have the Digital Signature attribute
set. If the attribute is not set, then the connection will fail.
To determine if a certificate has the Digital Signature attribute set, use
the View Certificate utility (viewcert) and look for the Digital Signature
attribute in the Key Usage portion of the output. If your client-side certificates
do not have the Digital Signature attribute set, then you must regenerate
the certificates.
FIPS-based end-to-end encryption now requires the private key to be encrypted
using AES
If the private key file provided to a MobiLink server by the e2ee_private_key
file option of the –x command-line option is encoded using 3DES and you are
running in FIPS mode, then the private key file needs to be regenerated with
the private key encrypted using AES.
How to update a MobiLink deployment that uses non-FIPS TLS/SSL (includes
HTTPS) and client-side certificates
1. If your client-side identity certificates do not have the Digital Signature
attribute set and the client connects directly to the MobiLink server, then
you must regenerate and deploy client-side certificates with the Digital
Signature attribute set.
2. Update the server-side binaries.
3. Update the client-side binaries.
How to update a MobiLink deployment that uses FIPS, TLS/SSL (includes HTTPS)
and client-side certificates
These steps update the client identity certificates twice if the Digital
Signature attribute is missing from client-side identity certificates. This
procedure can make the update less disruptive because synchronizations can
continue without having to coordinate the client-side and server-side updates
to occur at the same time.
1. If your current client-side identity certificates do not have the Digital
Signature attribute set and the client connects directly to the MobiLink
server, then you must regenerate and deploy client-side certificates with
the Digital Signature attribute set.
2. Update the server-side binaries (remembering to include the new FIPS
driver files) and deploy server identity certificates with AES-encrypted
private keys.
3. Update the client-side binaries (remembering to include the new FIPS
driver files) and deploy client identity certificates with AES-encrypted
private keys.
How to update a MobiLink deployment that uses FIPS and end-to-end encryption
1. Regenerate the primary key file referenced by the e2ee_private_key encryption
option.
2. Shut down the MobiLink server.
3. Update the MobiLink server binaries, remembering to include the new required
FIPS driver files.
4. Change the e2ee_private_key option to point to the new private key file
(or replace the old file), updating the e2ee_private_key_password, if required.
5. Restart the MobiLink server.

SQL Anywhere - Server
================(Build #1688 - Engineering Case #747805)================
For Syntax 2 of the DELETE statement and Syntax 2 of the UPDATE statement
the error detection behaviour of the server has been improved. These two
syntax forms allow an additional FROM clause that may contain the table-name
of the updated or deleted table, for example:
DELETE
FROM [owner.]table_1 [ [ AS ] correlation-name ]
FROM [owner.]table_1 [ [ AS ] correlation-name ] ...
WHERE ...
and
UPDATE [owner.]table_1 [ [ AS ] correlation-name ]
SET columns_1 = ...
FROM [owner.]table_1 [ [ AS ] correlation-name ] ...
WHERE ...
If the DELETE or UPDATE clause and the additional FROM clause have a table
reference that contains the same table name, in the above example "table_1",
then the server can only decide whether both are identical table references
if one of the following conditions is true:
- both table references are not qualified by specifying a user ID
- both table references are qualified by specifying a user ID
- both table references are specified with a correlation name
In cases where the server cannot decide whether the above table references
are identical, it will now return a SQL error to prevent unintended
semantics, such as deleting or updating too many rows.
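A sketch of the two cases (table, column, and owner names are hypothetical):

```sql
-- Ambiguous: the DELETE clause reference is qualified with an owner, the
-- FROM clause reference is not, so the server cannot tell whether both
-- name the same table reference; it now returns an error rather than guess.
DELETE
FROM dba.t1
FROM t1 JOIN t2 ON t1.pk = t2.fk
WHERE t2.status = 'obsolete';

-- Unambiguous: both references carry the same correlation name.
DELETE
FROM dba.t1 AS d
FROM dba.t1 AS d JOIN t2 ON d.pk = t2.fk
WHERE t2.status = 'obsolete';
```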

================(Build #1675 - Engineering Case #747798)================
A new system function has been added, READ_SERVER_FILE(). This function reads
data from a specified file on the server and returns the full or partial
contents of the file as a LONG BINARY value.
Syntax:
READ_SERVER_FILE( filename [, start [, length ] ] )
Parameters:
- filename LONG VARCHAR value indicating the path and name of the file
on the server.
- start The start position of the file to read, in bytes. The first byte in
the file is at position 1. A negative starting position specifies
the number of bytes from the end of the file rather than from the
beginning.
* If start is not specified, a value of 0 is used.
* If start is zero and length is non-negative, a start value of 1
is used.
* If start is zero and length is negative, a start value of -1 is
used.
- length The length of the file to read, in bytes.
* If length is not specified, the function reads from the starting
position to the end of the file.
* If length is positive, the read ends length bytes to the right of
the starting position.
* If length is negative, the function returns at most |length| bytes
to the left of the starting position, up to and including the
starting position.
Returns:
LONG BINARY
Remarks:
This function returns the full or partial (if start and/or length are specified)
contents of the named file as a LONG BINARY value. If the file does not
exist or cannot be read, NULL is returned.
filename is relative to the starting directory of the database server.
The READ_SERVER_FILE function supports reading files larger than 2GB. However,
the returned content is limited to 2GB. If the returned content exceeds
this limit, a SQL error is returned.
If the data file is in a different character set, you can use the CSCONVERT
function to convert it. You can also use the CSCONVERT function to address
the character set conversion requirements you may have when using the
READ_SERVER_FILE server function.
If disk sandboxing is enabled, the file referenced in filename must be in
an accessible location.
Privileges:
When reading from a file on the database server computer:
* You must have the READ FILE system privilege.
* You must have read permissions on the directory being read from.
Standards: SQL/2008 Vendor extension.
Example:
The following statement reads 20 bytes in a file, starting from byte 100
of the file.
SELECT READ_SERVER_FILE( 'c:\\data.txt', 100, 20 )
See also
* xp_read_file system procedure
* CSCONVERT function [String]
* Disk sandboxing
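The negative-start form can be combined with CSCONVERT as described above;
for example (file path and source character set are hypothetical):

```sql
-- Read the last 100 bytes of a server-side log file.
SELECT READ_SERVER_FILE( 'c:\\logs\\server.log', -100 );

-- Read the first 20 bytes and convert them from the file's assumed
-- character set (Windows-1252) to the database CHAR character set.
SELECT CSCONVERT( READ_SERVER_FILE( 'c:\\logs\\server.log', 1, 20 ),
                  'char_charset', 'cp1252' );
```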

================(Build #1674 - Engineering Case #747277)================
A new database property, BackupInProgress, has been added. Querying the
property returns 'on' while a backup is in progress, and 'off' otherwise.
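The property can be queried with DB_PROPERTY; for example, a maintenance
script might check it before proceeding (the check and message are
illustrative):

```sql
-- Defer maintenance while a backup is running.
IF DB_PROPERTY( 'BackupInProgress' ) = 'on' THEN
    MESSAGE 'A backup is running; deferring maintenance.' TO CLIENT;
END IF;
```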

================(Build #1673 - Engineering Case #747205)================
The geospatial method ST_BUFFER is now supported for all geometry types.
This method is compatible with the SQL/MM and OGC standards. ST_BUFFER returns
the ST_Geometry value that represents all points whose distance from any
point of an ST_Geometry value is less than or equal to a specified distance
in the given units.
ST_GEOMETRY::ST_BUFFER( distance double, unit_name long varchar )
- distance: The distance the buffer should be from the geometry value.
Must be greater than or equal to 0.
- unit_name: The units in which the distance parameter should be interpreted.
Defaults to the unit of the spatial reference system. The unit name must
match the UNIT_NAME column of a row in the ST_UNITS_OF_MEASURE view where
UNIT_TYPE is 'LINEAR'.
- Returns the ST_Geometry value representing all points within the specified
distance of the original geometry.
The ST_Buffer method generates a geometry that expands a geometry by the
specified distance. This method can be used, for example, to find all points
in geometry A that are within a specified distance of geometry B. The distance
parameter must be non-negative; this method returns an error if distance
is negative. If the distance parameter is equal to 0, the original geometry
is returned. The ST_Buffer method is best used only when the actual
buffer geometry is required. Determining whether two geometries are within
a specified distance of each other should be done using ST_WithinDistance
instead.
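For example, using hypothetical tables Shops (location ST_Point) and Roads
(shape ST_LineString):

```sql
-- Expand a road geometry by 50 metres and test containment. When only
-- the yes/no answer is needed, ST_WithinDistance avoids materializing
-- the buffer geometry.
SELECT s.name
FROM Shops s, Roads r
WHERE r.road_id = 1
  AND r.shape.ST_Buffer( 50, 'metre' ).ST_Contains( s.location ) = 1;
```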

================(Build #1665 - Engineering Case #746935)================
The dbo.sp_list_directory() stored procedure can be used to obtain information
about directories and files that are accessible to the SQL Anywhere Server.
Currently the sp_list_directory() procedure returns the following three columns:
- file_path (long nvarchar): the path of the server-accessible file or directory
- file_type (nvarchar(1)): either F for file or D for directory
- file_size (unsigned bigint): the size of the file, or NULL for directories
In order to provide more information about the various files and directories,
dbo.sp_list_directory() has now been enhanced to return five additional columns.
These five additional columns are:
- owner (nvarchar(128)): the owner of the file or directory
- create_date_time* (timestamp with time zone): the date and time the file
or directory was created
- modified_date_time* (timestamp with time zone): the date and time the file
or directory was last modified
- access_date_time* (timestamp with time zone): the date and time the file
or directory was last accessed
- permissions (varchar(10)): the set of access permissions for the file or
directory
All other aspects of dbo.sp_list_directory() – including the set of system
privileges and secure feature privileges – remain unchanged.
A database must be upgraded or newly initialized for applications to obtain
this new information from dbo.sp_list_directory(). In addition, if an
upgraded or newly initialized database is subsequently moved to an older
version of the server, the new columns will continue to be returned, but
their values will be NULL.
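The procedure can be queried directly in a FROM clause; for example
(directory path is hypothetical):

```sql
-- List server-accessible entries, newest-modified first, including
-- the new ownership, timestamp, and permission columns.
SELECT file_path, file_type, file_size, owner,
       modified_date_time, permissions
FROM dbo.sp_list_directory( 'c:\\data' )
ORDER BY modified_date_time DESC;
```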

================(Build #1614 - Engineering Case #744027)================
The SQL Anywhere PHP External Environment supports several versions of the
PHP interpreter. The SQL Anywhere install bundle includes a separate PHP
external environment dll or shared object for each supported version of PHP.
In addition, whenever support for a new version of the PHP interpreter is
added, the SQL Anywhere install bundle is updated to include the new PHP
external environment dll or shared object for the new version of the PHP
interpreter. Going forward, the SQL Anywhere install bundle will no longer
be updated with additional PHP external environment dlls or shared objects
when support for new versions of the PHP interpreter is added. Instead,
the new PHP external environment dlls and shared objects will now only be
available on the download site.

================(Build #1537 - Engineering Case #737497)================
Previously, the CREATE INDEX statement had been disallowed for local
temporary tables on read-only nodes. This has been changed; local temporary
tables are now the only tables for which index creation is allowed on
read-only databases.
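On a read-only scale-out node, a sequence like the following is now
permitted (table and index names are illustrative):

```sql
-- Local temporary tables are connection-private, so indexing them does
-- not modify the read-only database files.
CREATE LOCAL TEMPORARY TABLE work_items (
    id  INT,
    tag VARCHAR(40)
) ON COMMIT PRESERVE ROWS;

CREATE INDEX work_items_tag ON work_items ( tag );
```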

================(Build #1473 - Engineering Case #734038)================
The database property TimeWithoutClientConnection has been added.
The description for this database property is:
Returns the elapsed time in seconds since a CmdSeq or TDS client connection
to the database last existed. If there has not been a CmdSeq or TDS connection
since the database started, the time since the database started is returned.
If one or more CmdSeq or TDS connections are currently connected, 0 is returned.
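A sketch of an idle check based on this property (the one-hour threshold is
illustrative):

```sql
-- Classify the database as idle once no CmdSeq/TDS client has been
-- connected for more than 3600 seconds.
SELECT IF CAST( DB_PROPERTY( 'TimeWithoutClientConnection' ) AS INT ) > 3600
       THEN 'idle' ELSE 'active' ENDIF;
```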

SQL Anywhere - Sybase Central Plug-in
================(Build #1537 - Engineering Case #739081)================
Inherited object privileges can now be viewed for any table, view, procedure,
function, sequence generator, or dbspace via the “Privileges” tabs.
Also, inherited object privileges can now be viewed for any user or role
via the “Table Privileges”, “View Privileges”, “Procedure Privileges”, “Sequence
Privileges”, and “Dbspace Privileges” tabs.
In both cases, a new “Show Inherited” check box has been added to the tabs.
With the check box checked, the tabs show privileges that are inherited through
role inheritance, in addition to privileges that are granted explicitly.