Monday, December 28, 2009

First, Foxhound has to compile on Version 12, then it has to run properly against target databases created with versions 5.5 through 11.

What is Foxhound? It's a database monitor and schema display utility for SQL Anywhere databases, currently in development with its own beta planned for 2010.

Foxhound also has to recognize target databases created with version 12, and then it has to work properly with those target databases... it's one thing to get Foxhound to stop rejecting "12" as a database version number, quite another matter to get Foxhound to handle version 12's enhancements and behavior changes.

But, first things first: Foxhound compiles cleanly on the SQL Anywhere version 12 beta with no code changes, and it passes a simple smoke test.

Woohoo! That's no mean feat... virtually all of Foxhound is written in SQL...

80,000 lines of rather funky Watcom SQL, up from 60,000 lines a year ago.

There are a lot of little bugs in the underlying software, you've probably seen them and been just as irritated by them as I have. Feel free to report them on meta.stackexchange.com, but if you do, you'll probably see they've already been discussed... which is why I don't bother sweating the small stuff.

There are quite a few features that are "stubbed out" (not yet implemented) in the administration areas "admin", "mod" and "tools". The big one, IMO, is called "Download Your StackExchange Database". That's huge. SQLA will never go live without that one.

Not ever.

Allow me to elaborate: The creator of StackExchange just lost his entire blog because he didn't take responsibility for backing up his own data. SQLA is not my data, it is your data. It's one thing for Jeff Atwood to be careless with his own data, that's his problem. It would be an entirely different matter for me to take the same risks with your data. I don't care how safe the Fog Creek infrastructure is... blah blah blah cloud computing blah blah blah.

Until there is a reasonably convenient and absolutely reliable way for me to back up SQLA content, it ain't ever going live.

Here's what I really think: The backup problem will go away long before StackExchange goes live, and we'll be able to "pump up the volume" (invite more participation in SQLA) sometime in January or February.

That will happen before SQLA goes "live"... it will still say "govern yourself accordingly", but it'll be a lot more fun!

Tuesday, December 15, 2009

Object Relational Mappers: Friend or Foe?

Object relational mappers (ORMs) such as LINQ, Hibernate, and ActiveRecord bridge the gap between the relational database world and the object-oriented world. By abstracting the database into "virtual database objects", they let programmers develop in any language and environment that they like without ever writing a line of SQL. It sounds great, but is the ORMs' sweet song actually a siren's call? This talk will put ORMs on trial to help us find the answer.

Which brings us to this webcast, which uses the same abstract.

Sunday, November 22, 2009

Another slow(er) week

The total number of questions rose from 25 to 44. That's less than double, indicating that posting activity was slower in the second week. Also, a significant number of questions were of the administrative variety, so nobody better get excited about SQLA being crushed by its own success or anything like that.

Not yet, anyway.

The number of users rose from 29 to 35... nope, not exciting yet.

But they are good users, and they've been asking some great questions... more on that later.

Here are the StatCounter numbers for the nine full days since the hidden JavaScript thingie was plopped onto the "Sidebar Low" area (which sits at the bottom-right of some - not all - of the SQLA page displays):

Top Four Questions

Here are the top four new questions from SQLA's second week of operation:

It's hard being a tag

Tag population control is in effect at SQLA.

SQLA has its own Search facility, plus there's always Google, so there doesn't seem to be much point in defining a bazillion different tags for the purpose of improving search. Plus, it's really hard work (ask anyone who's had to create a book index), and very few people do a good job (ask me about the mediocre job I did on my book).

So, when a tag gets used more than once, it's going to get looked at to see if it should be renamed, eliminated, merged with another tag, etcetera, all with the goal of using tags to classify or categorize the question.

Users are completely free to define new tags, five per question if they want (the software-imposed limit). It's only when a tag gets used on more than one question that it becomes a candidate for euthanasia, er, revision.

Training begins for the backup administrator

A volunteer for the post of backup SQLA administrator has been accepted, and training begins on Tuesday. Then, while the primary administrator lies face-up by the pool, the backup administrator will be responsible for keeping SQLA free from evil-doers.

For that role the backup administrator will be wearing a supersuit, with cape.

Saturday, November 21, 2009

Apparently I don't check my email very carefully because I missed the original announcement of this webcast:

Cloud and Multi-Tenant Architectures with SQL Anywhere Webcast

SQL Anywhere supports a variety of capabilities that ease the shift from an on-premise database server to a hosted server environment. This session will examine different architectural approaches for designing and developing multi-tenant applications to be hosted in a SQL Anywhere Server. Additionally, several operational considerations for hosted SQL Anywhere environments will be discussed.

That takes care of part of SQLA's backup and recovery strategy: We trust Fog Creek, they operate on a far higher plane than Magnolia ever did.

(Magnolia was named after the Greek god of cluelessness, and it has now been re-launched as "Gnolia" which is Latin for "missed business opportunity".)

The second part of SQLA's strategy is the "verify" part of "trust but verify". It is expected that before the StackExchange service goes live (it is in Beta now, just like SQLA is), the "Download Your StackExchange Database" site management option will be implemented.

Having said all that, there are gaps, like "Will it be possible to upload the StackExchange data we downloaded?"

Wednesday, November 18, 2009

Here's an email I got Monday from the International Sybase User Group.

I am sure everyone from the USA is welcome, not just folks from the northern part.

Caution: Canada can be cold in December so bring a sweater!

Dear Sybase Enthusiast,

Please join us on December 1st and 2nd, 2009 for the ISUG Canadian Technical Summit to be held in the Canadian Capital - Ottawa!

For the first time, Ottawa will be hosting a two-day event drawing together many Sybase professionals and 3rd party vendors from Canada and the northern USA. This event will have two streams - Data Management and Development. You will be able to register for either one or, better yet, make sure that you bring a colleague with you so that you can cover both agendas. There is even a planned "cocktail reception" on the Tuesday night (Dec 1st).

On the Development side we will see evangelist John Strano showing us the latest PB 12.NET from engineering, best practices and another training session. For Data Management we will have key Sybase personnel addressing key product futures, Mobility and presentations on comparing various Data Modeling approaches. You will have access to key Sybase technical staff from Canada and the USA to ask questions about Sybase products, interact with users utilizing the same technologies and gain key information into 2010 based products and features.

For more information and to see a complete list of topics covered, please see our registration page. You'll notice that to register you must be a member of the International Sybase User Group (ISUG). You can become a member directly at http://www.isug.com/na/SelectOptions.html or by joining the Ottawa Sybase User Group (OSUG) at http://www.isug.com/na/OSUG.html. All members of OSUG are automatically members of ISUG as well! You can register with a free Associate Member account, but we encourage you to become a full Regular Member if you can.

For questions about and support for this event, you can contact ISUG's North American Regional User Group Director and OSUG President, Chris Pollach (chris@isug.com). Chris will be happy to answer any questions you may have and assist you with registering if necessary.

Sunday, November 15, 2009

For you history buffs, work on SQLA began around 1 or 2 AM on November 8... seriously, six hours from project inception to live beta, it's that easy to set up a StackExchange site. Before 1 AM it was just a to-do: "try out StackExchange beta".

In fact, the site was created from scratch twice, once using the wrong site name (gotta respect those trademarks!)

Here are the usage numbers

According to the SQLA software itself, there are now

29 registered users,

25 questions,

46 answers and

42 comments, and there have been

850 views.

According to StatCounter, for the one full weekday that it was told to accumulate SQLA statistics (Friday), there were:

532 page views,

83 unique visitors and

40 returning visitors.

Considering that there has been no mention of SQLA on any of the NNTP forums yet, I think the numbers are... low? high? just right?

Top Five Questions

Chosen subjectively by a panel of judges, based on content and activity:

Comments are being used as a substitute for threaded discussions

For many years I have clung to the idea that threaded discussions are absolutely necessary for having, well, discussions. One hundred percent of the HTTP-based implementations of threaded discussions suck, when compared with NNTP (newsgroups). That includes purely-web-based discussion sites, all the web interfaces to newsgroup servers, even the re-incarnation of Google Groups. They are either slow, or unusable, or both.

Only classic port 119 (NNTP) clients like Forte Agent work well for threaded discussions.

The trouble is, all the world's firewalls are closing down port 119. And new features are coming to the web, not newsgroups.

So, now we have SQLA. Wonderful interface, fast enough, but... no threaded discussions.

Or are there? It is starting to look like being able to

post multiple answers,

and post multiple (albeit short) comments on each answer,

AND post comments on the question itself,

as well as editing the answers and even the question,

is a pretty [expletive deleted] good alternative to threaded discussions.

Well, I'm not sure about the "editing the answers and even the question" part, that's not really happening yet and I'm not sure it should (see later point "Shy people are still shy").

Out there in the blogosphere people are having endless discussions using comments, with and without cheesy "nesting" as a substitute for threading, and the overwhelming volume of traffic says it's working. Here at SQLA, the multiple-answers-multiple-comments features should be much better/easier than that.

Plus, it's easy to type a twitter-style "@username:" prefix on your comment, the cyberspace equivalent of the meatspace shout "Hey, Joe, I think we might have a solution for you, come on over, let's talk!"

Maybe it will work, maybe it won't. I think it will. I think it will work very well for "How do I..." questions, and even better for discussions about future product features... possibly even better at that than the NNTP forum.

We'll see.

Will SQLA replace the NNTP forums? Don't know, don't care. It's like cellular, some people still use radio telephones ... honestly, I didn't know that either, but it's true, and they have reasons (like, "it's the law", stuff like that).

Will Breck still answer questions on the NNTP forums? Sure will! That's the point, answering the questions... not having a protocol war, but helping people.

What will happen when Sybase/iAnywhere implements a modern web-based community site? That's a good question ... [struggle] [microphone drops] ... We have no comment on that at this time.

Voter turnout is low

A total of 30 up-or-down votes have been cast. That represents a voter turnout of 1.5% according to an admittedly absurd formula; absurd because of

the fact that users don't necessarily form an opinion about everything they read,

and the fact that down-votes should probably be rare, perhaps almost non-existent.

Still, 1.5% feels low. Voting is a powerful feature for making SQLA better.

Maybe 3% turnout is a goal worth seeking. Plus, getting more folks who ask questions to check the answer they like the best. And getting folks to click the "Favorite" star.

Early adopters are behaving well

Nobody has voted anything down into negative-points territory, or at least nothing's been voted down past zero points. That's OK with me.

Nobody has edited anything except their own stuff, at least as far as I can tell.

In fact, nothing bad is happening at all. That is *not* a surprise.

Shy people are still shy

I don't know of any way to *guarantee* people will become less-shy about asking questions, but there are an infinity of ways to guarantee people will *not* participate. You can see some of those morale-destroying techniques in action over at StackOverflow. The software is wonderful but a significant minority of the active StackOverflow participants are arrogant [expletive deleted]s ... my [insincerity alert] apologies in advance, but there's no other way to put it.

That won't happen at SQLA, it's moderated.

But the real reason it won't happen is this: The SQL Anywhere user community consists of folks who treat each other with respect.

So far, "SQLA Moderator" has been the easiest job in the world. It's probably illegal to have this much fun.

And finally...

Here is StatCounter's view of where SQLA page views came from for the last couple of days.

Question: Did the user on the shores of Hudson Bay at the southern border of Nunavut answer a question asked by the user at the end of PA279 near the Xingu river and São Félix do Xingu in Pará, or was it the other way around, or neither case?

SQLA is different from StackOverflow in several ways: in particular, discussions are encouraged and SQLA is moderated.

In other important ways, SQLA is the same as StackOverflow: it is (soon will be) Google-searchable, it will always be free to use, the reputation points and badge systems are the same, and free speech is protected (within the bounds of moderation).

(If you've never heard of StackOverflow, you're in for a real treat! ...it's the best thing since Google, as far as technical question-and-answer services are concerned.)

Caveat Emptor: SQLA is currently in early beta testing... VERY early beta, where "early beta" means "likely to change". Not only is the SQLA site in beta, but so are the rules of moderation (see the unfinished FAQ), and so is the StackExchange service and the software it provides. Also, note that SQLA is not owned by Sybase.

Here's the announcement, ready to copy as HTML:

New website for SQL Anywhere questions and answers

One of the founding members of Team Sybase has started a moderated community-based website for questions and answers about SQL Anywhere and related topics: sqla.stackexchange.com

SQLA is different from StackOverflow in several ways: in particular, discussions are encouraged and SQLA is moderated.

In other important ways, SQLA is the same as StackOverflow: it is (soon will be) Google-searchable, it will always be free to use, the reputation points and badge systems are the same, and free speech is protected (within the bounds of moderation).

(If you've never heard of StackOverflow, you're in for a real treat! ...it's the best thing since Google, as far as technical question-and-answer services are concerned.)

Caveat Emptor: SQLA is currently in early beta testing... VERY early beta, where "early beta" means "likely to change". Not only is the SQLA site in beta, but so are the rules of moderation (see the unfinished FAQ), and so is the StackExchange service and the software it provides. Also, note that SQLA is not owned by Sybase.

Friday, November 6, 2009

This little procedure's been floating around for years and years, and the source code sits in various inaccessible places like the NNTP newsgroups, the execrable CodeXchange and a sub-sub-sub-folder on my C drive called "miscellaneous".

Time to put it somewhere accessible: Google.

As in, "on an actual web page".

As in, here...

How to get full control over a SQL Anywhere server

CREATE PROCEDURE p_drop_other_connections ()
BEGIN

/* How to get full control over a SQL Anywhere server...

(1) CALL sa_server_option ( 'ConnsDisabled', 'ON' );

At this point, any attempt to open a new connection will get "Connections to database have been disabled". However, current connections will still work until...
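The rest of the procedure isn't shown above, but a minimal sketch of the remaining steps (an illustration of the technique, not the original code) would drop every connection except the current one:

```sql
-- Sketch only: after ConnsDisabled is 'ON', drop all other connections.
-- sa_conn_info() lists the current connections; CONNECTION_PROPERTY ( 'Number' )
-- identifies this connection, which must not be dropped.
FOR f_conn AS c_conn CURSOR FOR
SELECT sa_conn_info.Number AS @conn_number
  FROM sa_conn_info()
 WHERE sa_conn_info.Number <> CONNECTION_PROPERTY ( 'Number' )
DO
   EXECUTE IMMEDIATE STRING ( 'DROP CONNECTION ', @conn_number );
END FOR;

-- When maintenance is done, let everyone back in:
-- CALL sa_server_option ( 'ConnsDisabled', 'OFF' );
```

At that point the current connection is the only one left, and no new ones can get in until ConnsDisabled is turned back off.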

Thursday, November 5, 2009

If iAnywhere Solutions ever put on a standalone SQL Anywhere Developer Conference (they won't, alas), Eric Farrar would be enough of a "big name draw" to pack the place. Anyone else on the agenda would be icing on the cake.

Let me put it this way: If, after hearing an Eric Farrar presentation, you're not more enthusiastic about your job and about SQL Anywhere, you need medical attention.

(Right about now, his co-workers are probably saying, "Look, Eric, Breck has a man-crush on you!")

Not true. If I was going to have a man-crush on anyone, it would be Brad Pitt.

ANYway...

You don't need to wait for any conference, you can see the latest of Eric's offerings here... UltraLite replacing SQLite in Google Gears... a browser-side SQL Anywhere database, with MobiLink synchronization. Woohoo!

Yes, in these modern days of 64-bit Windows 7 and Core 2 Duo processors and giant drives, I'm running Windows XP on a 3.4GHz Pentium 4 with 2G (gasp!) of RAM and a 100G drive. It's my "main machine", the one I lug around to clients, the one I do almost all of my work on. With seven different versions of SQL Anywhere, plus Oracle 10g, SQL Server 2008, Visual Studio, blah blah blah... hundreds and hundreds of thousands of files.

And yes, my computer runs hot. Much too hot to hold in my lap. I even carry a little portable fan with me, to keep it from overheating and shutting down. And battery life sucks, less than one hour on a brand-new battery. But it's fast, fast, fast...

mostly...

Over time, as more and more software is installed and removed, more files created and destroyed, performance on this laptop starts to slip. Such was the case this weekend when it became just too unbearable to wait 10 or 20 seconds for Wordpad to open a tiny text file.

Memories of mainframe, memories of 4.77 MHz.

No, there are no viruses or spyware or rootkits running; the router and firewall and Norton 360 and Windows Update take care of that. No apparent disk fragmentation; Diskeeper takes care of that (every time the screen saver kicks in, Diskeeper does some defragmentation).

I say "apparent" because if you go hunting around in the Diskeeper menus, you see there's a feature called "boot-time defragmentation"...

Tip: Don't check "Produce a summary log file" if you don't have a locally-attached drive that will be available during the boot period, other than the drive being defragmented. In other words, no network drives and (I think) no USB drives. Diskeeper won't warn you, but the boot-time defragmentation will skip most of the steps and you'll have to start over. You can specify a floppy drive A:, but who has floppies any more? No summary log for me, my laptop only has the C: drive.

The Diskeeper docs promise that the boot-time defragmentation process will do a lot of work the regular defragmentation doesn't do, and that does seem to be the case. The reboot process took longer than an hour including before-and-after CHKDSK runs; the hard drive light was on solid for the whole time.

Tip: When Windows asks to run CHKDSK because "One of your disks needs to be checked for consistency", let it run. Otherwise, Diskeeper will refuse to run any regular defragmentation steps and you'll have to reboot and let CHKDSK run.

After the boot-time defragmentation Diskeeper still showed a few "Pink - low performing system files"; not many, and far fewer than before. Another regular defragmentation run got rid of almost all the pink areas, and now it's all beautiful "Blue - high performing files and folders"...

I don't have screenshots from before all this, but here's what it looked like beforehand:

No "Red - low performing files and folders" at all, probably because Diskeeper runs regularly during screen saver periods, but

A large number of "Pink - low performing system files" areas that did not go away no matter how many times I did full manual Diskeeper defragmentations.

So how's performance now? A lot snappier: files and folders seem to open much more quickly. No hard numbers though :)

Tip: There's a wealth of information about Windows disk fragmentation on the Diskeeper site, including Myths of Fragmentation which includes this gem: "A freshly-installed operating system starts off with its files so badly fragmented that a simple defragmentation can decrease the boot time from 153.9 seconds to 111.8 seconds."

Friday, October 30, 2009

For many years, SQL Anywhere has used Microsoft Compiled HTML Help files (*.chm) to deliver all the manuals in one easy-to-read, easy-to-search package on your desktop, one that doesn't make you wander through endless menus and wait for slow-loading web pages.

These files are easy to copy, too, and sometimes folks copy the Help files from machine to machine without using the official SQL Anywhere setup.exe... and they zip them up and send them by email as well.

Suddenly, however, that process has become fraught with error: On the receiving end, when you open the *.chm file, you get "Navigation to the webpage was canceled" no matter which topic you navigate to:

Steven McDowell has the answer (and by the way, the problem DOES NOT AFFECT files installed by the official setup):

from: Steven McDowell
to: Breck Carter
date: Wed, Oct 28, 2009 at 5:11 PM
subject: Files downloaded from our FTP site can be "blocked" on XP/Vista/Win 7

Hi Breck,

FYI ... my investigation of your "blocked" CHMs seems to have determined what is going on.

Steve

It appears that a recent Windows security update may have further restricted the use of CHM files on XP and Vista. I don't know if other file types are affected by this, but I thought I would share the info since we often ZIP files and send them via FTP to customers and other staff.

Sneak peek: It actually appears to be an IE "feature". That's all you get here. See conclusion at bottom. :-)

It was reported that our CHM docs failed to open on an XP computer. The CHMs had been zipped here and copied via FTP. When they were unzipped, they were marked as "blocked" and the contents could not be viewed until each file was unblocked.

I have reproduced the problem on my Windows 7 system. I am an administrator but still have UAC turned on. Here is what I encountered:

* Unzipping to a local temp folder proceeds normally.

* When I try to open one of the CHMs, I get:

* If I leave the "Always ask" checkbox checked and click Open, the CHM opens, but the contents pane is blocked (with an unhelpful message). This behaviour is the same as trying to open a CHM from a network drive, but this is happening on a local drive.

* As long as the "Always ask" checkbox is checked, the contents pane is blocked for that file regardless of how many times you try to open it. Once I uncheck the box and click Open, not one but two things happen: 1) the contents pane is visible, and, 2) the dialog no longer appears. So something in the file's attributes has changed permanently.

* I picked another CHM file that had not been opened yet. I displayed its Properties. Note the Unblock button and the text to the left. This button and text are not displayed in the Properties for CHMs that were opened with the checkbox turned off. So, the "Always ask" checkbox has the dual purpose of suppressing the dialog and marking the file as unblocked.

Fortunately, it appears that using our doc install does not trigger this blocking problem. There is the possibility that other file types sent via ZIP (and probably other file copy methods) will also encounter these issues.

I did find one discussion (in the social.msdn.microsoft.com link below) that suggests it is IE that is noting that the files came from "the Internet" and marking them as blocked. I'm not sure how that works inside a ZIP file, but it appears to. Possibly non-IE transfers will not do this. CONFIRMED! I downloaded the same ZIP file via FTP using Firefox instead of IE and the files are not blocked!

Wednesday, October 28, 2009

Well, not exactly false, but easily fixed so the technique will work on earlier database files: Versions 5.5, 6, 7, 8 and 9. It turns out that while the Version 11 engine can't run a Version 9 database file, that restriction does not apply to the dbunload utility. In order to facilitate database migrations, dbunload contains the ability to read earlier database files... and that ability extends to ordinary dbunload-to-a-SQL-file steps.

And that bit of magic makes it possible to compare all versions of databases. Here's the full rewrite of the original article:

Question: How do I compare the schemas of two SQL Anywhere databases?

Answer, from Bruce Hay at Sybase iAnywhere: Check out the new "-no" option for dbunload in SQL Anywhere 11 (unload objects in name order, no data). It's intended for this purpose.

Here's how it works:

Step 1: Run dbunload against the first SQL Anywhere database to create a text file containing CREATE TABLE and other schema definitions. The new -no option tells dbunload to unload just the schema, not the data, and do it in name order so it will be easier to compare with another text file sorted in the same order.

Step 2: Run dbunload again, on the second SQL Anywhere database, to produce another text file.

Step 3: Run a text comparison utility like Compare It! to show the differences between the two text files.

[ Keep reading, the new stuff is coming soon... ]

Here's why using dbunload is a good approach:

The dbunload utility doesn't miss anything; it unloads every single schema object in the database... tables, views, procedures, triggers, everything.

The dbunload utility is always going to be more up-to-date than any other software except the SQL Anywhere engine itself. In other words, even PowerDesigner falls behind dbunload in terms of supporting SQL Anywhere syntax for new features.

Running dbunload on actual databases guarantees you will be comparing the current database schemas, not out-of-date information stored in separate model files.

But Wait! It's Not Quite That Easy!

How you code the "Run dbunload" parts of Steps 1 and 2 depends on what version of SQL Anywhere was used to create each database:

If the database was created with Version 5.5, 6, 7, 8 or 9 you have to run the Version 11 copy of dbunload.exe without starting the database first. Use the -c "DBF=filespec.db;UID=xxx;PWD=yyy" connection string so that the Version 9 engine inside dbunload.exe will be used to start the database. Yes, this is magic!

If the database was created with Version 10 or 11, you do have to start the database first, using the Version 11 engine dbeng11.exe or dbsrv11.exe, and then run the Version 11 copy of dbunload.exe with the usual client-server connection string -c "ENG=aaa;DBN=bbb;UID=xxx;PWD=yyy".

That's not just magic, it's also complicated and confusing. That's why I've created three examples you can copy and edit: Compare Version 10 and 11 databases using the second technique described above, then Version 5.5 with Version 9 using the first technique, and finally Version 5.5 with Version 11 using both techniques. These examples all use the demo databases that come with the various versions of SQL Anywhere.

Example 1: Compare Version 10 and 11 Database Schemas

This script is easily changed to work with two Version 10 databases, or two Version 11 databases.
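The script itself isn't reproduced here, but a rough sketch (folder and file names are made up; assumes both versions are installed along with their demo databases) looks like this:

```bat
REM Sketch only - not the original script. Compare V10 and V11 demo schemas.
REM %SQLANY10% and %SQLANY11% are hypothetical install locations.

REM Start both databases on the Version 11 engine, as described above.
"%SQLANY11%\bin32\dbeng11.exe" demo10.db -n demo10
"%SQLANY11%\bin32\dbeng11.exe" demo11.db -n demo11

REM Unload schema only, in name order, using the Version 11 dbunload.
"%SQLANY11%\bin32\dbunload.exe" -c "ENG=demo10;DBN=demo10;UID=dba;PWD=sql" -no -r schema10.sql
"%SQLANY11%\bin32\dbunload.exe" -c "ENG=demo11;DBN=demo11;UID=dba;PWD=sql" -no -r schema11.sql

REM Step 3: compare the two text files with Compare It!, WinDiff, diff, etc.
```

Change the file specs and server names to point at your own databases; the shape of the script stays the same.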

* based on SA11 web server and full text search (no Apache or PHP)
* preview area added to "Submit a comment" dialog; your comment is formatted and displayed while you type
* index is much faster now
* bookmarks for pages are more readable

Friday, October 23, 2009

(Updated August 3, 2012 to include Version 12, plus a screenshot of the Plan Viewer.)

Graphical plans are wonderful things. Graphical plans tell you everything you need to know about why a query is running slowly.

Even if you don't have a clue about how to read a graphical plan, it's still wonderful because you can capture it in a file and send it to someone who does know how to read it (like Glenn Paulley) or someone like [cough] who pretends to know :)

Here are the steps for capturing the graphical plan, for Versions 8 through 12. At the bottom there's a "Tip" that is very important, but only for Version 10 and earlier.

How To Capture A Graphical Plan In Versions 11 and 12

- If desired, run CALL sa_flush_cache() in dbisql.

- Type or paste your query into dbisql.

- Choose Tools - Plan Viewer.

- Select Statistics level: Detailed and node statistics.

- Select Cursor type: Asensitive.

- Select Update status: Read-only.

- Click "Get Plan".

- When the plan appears, click "Save As...".

- Use Save as type Execution plan (*.saplan).

Click on this screenshot to see it full size...

How To Capture A Graphical Plan In Version 10

- In dbisql, choose Tools - Options... - Plan.

- Check "Graphical plan with full statistics".

- Check "Assume read-only cursor" if that applies.

- Pick "Assume cursor is: Asensitive" if that applies.

- Click "OK".

- Run CALL sa_flush_cache() if desired.

- Run your query (see Tip below).

- Do File - Save As - Save as type: XML (*.xml).

How To Capture A Graphical Plan In Version 9

- In dbisql, choose Tools - Options... - Plan.

- Check "Graphical plan with statistics".

- Check "Assume read-only cursor" if that applies.

- Pick "Assume cursor is: Asensitive" if that applies.

- Click "Make Permanent".

- Run CALL sa_flush_cache() if desired.

- Run your query (see Tip below).

- Do File - Save As - Save as type: XML (*.xml).

How To Capture A Graphical Plan In Version 8

- In dbisql, choose Tools - Options... - Plan.

- Check "Graphical plan with statistics".

- Check "Assume read-only cursor" if that applies.

- Pick "Assume cursor is: Asensitive" if that applies.

- Uncheck "Show UltraLite plan" unless you want that.

- Click "Make Permanent".

- Run CALL sa_flush_cache() if desired.

- Run your query (see Tip below).

- Do File - Save As - Save as type: XML (*.xml).

Tip for Version 10 and earlier: If you do use dbisql to execute a query and capture a graphical plan with statistics, use SQL - Get Plan (Shift+F5) instead of SQL - Execute (F5 or F9). If you use Execute, the query will be executed twice: once to determine the result set and a second time to capture the plan. This plan may be completely different from the one used the first time the query was executed, and not just because the cache contains the rows. For example, if the plan is for a DELETE, the first execution will actually delete the rows so the plan may be for the completely different "zero rows" case. If you use Get Plan (Shift+F5) the query will only be run once, and the plan will match.

Friday, October 16, 2009

Maybe I've been living a sheltered life, but until recently I had never seen SQL Anywhere have a Cache Panic. Not once, not in production, not in testing, not even while testing Foxhound which is supposed to display the number of Cache Panics on the monitor page.

What is a Cache Panic? It happens when SQL Anywhere needs to allocate a page in the cache but can't find one. Sounds bad, but what with dynamic cache sizing and the temporary file, I'd never seen one.

At one point, I considered removing Cache Panics from the Foxhound monitor page to save valuable screen real estate; if you've seen the Foxhound monitor page, you know what "crowded" means when it comes to displays :) However, a wise voice counseled me to leave it alone, saying that it's an important statistic.

Not important enough, however, for me to include a Cache Panics threshold in the Foxhound email alerts feature... that was a bad decision, about to be fixed... because I've just seen my first Cache Panics, and believe me, they are bad:

What the Foxhound figures above don't show is how badly performance can be affected when Cache Panics are happening. When you start a SQL Anywhere database using the in-memory feature (dbsrv11 -im nw in this case), there is no temporary file, and it doesn't write anything to the database file, so SQL Anywhere has no place other than RAM to put a page of data. If the RAM cache is full, SQL Anywhere has to allocate more RAM.

It gets worse: If you don't specify the -c initial RAM cache size value when you start the server, and if you perform some intense operation like LOAD TABLE a million rows, SQL Anywhere will have to allocate RAM at a furious pace... all the while Cache Panicking when it can't find a page to allocate.

Who cares? If performance is important, you care. In this case adding -c 1200M to the dbsrv11 command line improved LOAD TABLE performance by 165%, from 2,878 rows per second to 7,640 rows per second.
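For the record, here's the shape of the command line involved; the install path and database file name are hypothetical, and 1200M is just the value that happened to work for this particular load:

```
REM Hypothetical example: start the database in-memory (never-write)
REM with a 1200M initial cache instead of letting the cache start tiny.
"C:\Program Files\SQL Anywhere 11\bin32\dbsrv11.exe" -im nw -c 1200M demo.db
```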

Grow In-Memory Cache Faster

Here's what the server console log looked like when it had to grow the cache a zillion times, up to 10 times per second:

Most of the time, growing the cache step-by-step works just fine... but not when the database is growing rapidly and it's all stored in memory.

The Suggestion...

So here's the Product Futures suggestion: When a SQL Anywhere database is running in-memory, and a lot of Cache Panics are happening, and it's having to grow the cache ten times a second, then the dynamic cache sizing algorithm should pick some increment other than 64 pages when allocating more RAM.

That way, we won't have to fiddle around trying to pick exactly the right -c initial cache size.

...Or, Maybe Not

It's a little known fact (well, I didn't know it) that picking a reasonable value for -c is a good idea regardless of the cache growth performance issue described here.

For example, if you expect the cache to grow to some size between 1G and 2G, you should specify a value like -c 1G instead of letting it default to the tiny value (on Windows) of 2M. The server uses hash chains to find data in the cache, and the number of hash chains is determined by the initial cache size. A cache that starts at 2M will have fewer hash chains and those chains will grow very long (and slow) as the cache grows to 2G; performance will suffer, even if the Cache Panics problem is solved. And that performance problem will persist even after the cache finishes growing.

On the other hand, a cache that starts at -c 1G and grows to 2G will have more, shorter chains, and better performance. In other words, you don't have to pick an exact value for -c, but you should avoid letting it default to a stup..., er, wildly inappropriate value (like 2M) if you know it's going to grow to at least 1G.

And, of course, the Cache Panics problem won't be nearly so noticeable if you pick a half-decent value for -c. Plus, it goes away after the cache finishes growing.
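To keep an eye on this on a running server, the cache counters can be sampled with the PROPERTY() function; here's a sketch (I believe these are the property names, with the cache sizes reported in kilobytes, but check your version's docs):

```sql
-- Sample cache growth and panic counters on a running server.
SELECT PROPERTY ( 'CurrentCacheSize' ) AS current_cache_kb,
       PROPERTY ( 'PeakCacheSize' )    AS peak_cache_kb,
       PROPERTY ( 'CachePanics' )      AS cache_panics_so_far;
```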

Wednesday, October 14, 2009

Sybase forum users will recognize "SQL Anywhere Product Futures Discussion" as the title for the sybase.public.sqlanywhere.product_futures_discussion newsgroup on the forums.sybase.com NNTP server. It's available from your web browser, but I won't put a link here because every one of the several HTTP / HTML interfaces to the NNTP forums is, well, inadequate.

Besides, the NNTP forums are reaching the end of their usefulness, which is why I'm not going to post suggestions there any more... I'm going to post them here, starting now...

DB_PROPERTY ( 'BackupInProgress' )

There should be a database property which answers the question, "Is a database backup process in progress?" It would apply to the BACKUP statement and the dbbackup.exe utility.

The question arises, should the answer be as simple as yes/no, or more complex? The continuous live backup process comes to mind, where the answer would always be yes... maybe live backups should be excluded.

And what about database versus log-only backups? Should the response be db/log/both/no?

That's what "discussion" is for; if you have an opinion, post a comment here.

One reason for having this property is so database monitor programs, like Foxhound and the SQL Anywhere Monitor, could adjust their behavior when a backup is running; e.g., they could recognize that server I/O processing will change in behavior, and that checkpoints will be suspended.

Another reason might be so you can write code that starts a backup as long as one isn't running already, perhaps a second redundant process that makes sure a backup is taken even if some other backup process failed.
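If the property existed, that "second redundant process" idea might look something like this; note that 'BackupInProgress' is the proposed property, not something you can call today, and the event name, schedule, and backup folder are all hypothetical:

```sql
-- HYPOTHETICAL: DB_PROPERTY ( 'BackupInProgress' ) does not exist yet.
-- This sketch shows how a belt-and-suspenders backup event might use it.
CREATE EVENT redundant_backup
SCHEDULE START TIME '02:00' EVERY 24 HOURS
HANDLER BEGIN
   IF DB_PROPERTY ( 'BackupInProgress' ) = 'No' THEN  -- proposed property
      BACKUP DATABASE DIRECTORY 'c:\backup';          -- hypothetical folder
   END IF;
END;
```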

DB_PROPERTY ( 'BackupSuccessfulAt' )

This database property should answer the question, "When was the last backup taken?" It should return NULL if there never has been a successful backup, or the timestamp when the most recent successful backup finished.

In my opinion, this request doesn't need any justification... it's hard to think of any question more important than one involving database backups. And today, there is just no reliable way to answer the question, certainly not easily.

I'm guessing live backups don't count here, if only because they never finish unless there's a problem; e.g., the server crashes, so dbbackup loses connectivity... and a database property is useless if the server ain't running.

Should there be different properties for different backup types? e.g., 'DBBackupSuccessfulAt', 'LogBackupSuccessfulAt', 'FullBackupSuccessfulAt'... more discussion!
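In the meantime, a do-it-yourself version is possible if all your backups go through your own code; here's a sketch with hypothetical table and folder names:

```sql
-- DIY substitute for the proposed 'BackupSuccessfulAt' property:
-- record the completion time of each successful backup yourself.
CREATE TABLE backup_history (
   backup_finished_at TIMESTAMP NOT NULL PRIMARY KEY );

-- The INSERT is reached only if the BACKUP statement succeeds;
-- an exception in BACKUP skips it.
BACKUP DATABASE DIRECTORY 'c:\backup';
INSERT INTO backup_history ( backup_finished_at )
   VALUES ( CURRENT TIMESTAMP );

-- "When was the last backup taken?" (NULL if never)
SELECT MAX ( backup_finished_at ) FROM backup_history;
```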

Monday, October 12, 2009

"It would be a nice feature if developers could implement a Global Exception Handler. In case of errors in procedures invoked via a database event there is only a error message in the database console. With a Global Exception Handler I could develop a general error notification e.g via email. There is an event for after RAISEERROR. Does there exists somthing like that for native database exceptions?"

Eric Farrar thinks it's a good idea too, which is wonderful news; Eric is highly respected throughout the SQL Anywhere community and his word should carry some significant weight:

"Unfortunately, there is nothing inside of SQL Anywhere to allow you to specify a top-level exception handler that will be called on any native database exception. I do understand why this would be useful (especially in a web context), and I have passed this idea on to the appropriate engineers."

Exception handling plays a huge role in the web-based Foxhound database monitor, and I would love to be able to code an all-encompassing global exception handler to record errors for later diagnosis.

One problem facing a global exception handler, however, is providing an answer to the question "Where, in all the thousands of lines of SQL code, did this exception occur?"

That's a hard question to answer even for a local exception handler. One solution (used by Foxhound) is to maintain a "where am I?" variable as the code executes just so an exception handler can include it with the other information it saves and/or displays.

For a global exception handler, one could use a global CREATE VARIABLE to store the "where am I?" value, but I would rather not... I would rather have SQL Anywhere provide the source code line number and object name (procedure, event, service, etc).
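Here's a sketch of the "where am I?" technique with an ordinary local exception handler; the procedure, variable, and table names are illustrative, not Foxhound's actual code:

```sql
-- Maintain a "where am I?" variable so the exception handler can
-- record roughly where in the code the failure happened.
CREATE PROCEDURE monitor_step()
BEGIN
   DECLARE @where_am_i VARCHAR ( 200 );

   SET @where_am_i = 'monitor_step: gathering samples';
   -- ... work that might fail ...

   SET @where_am_i = 'monitor_step: saving results';
   -- ... more work that might fail ...

EXCEPTION
   WHEN OTHERS THEN
      INSERT INTO exception_log ( logged_at, location, sql_state, error_message )
         VALUES ( CURRENT TIMESTAMP, @where_am_i, SQLSTATE, ERRORMSG() );
END;
```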

Sunday, October 11, 2009

Let's all welcome Microsoft and T-Mobile's Sidekick service as the latest Database Deadpool winners!

Here's the reported time line for the T-Mobile disaster:

Friday, October 2, 2009

Microsoft, whose Danger subsidiary powers the [T-Mobile] Sidekick service, said it is "working urgently" to deal with the problems, which interrupted data service for most Sidekick users starting at 1:30 a.m. PDT on Friday.

Sunday, October 4, 2009

...it seems that Danger's data center has been on the fritz for a solid two days now with symptoms ranging from text message weirdness to dead address books and everything in between, up to and including a completely unusable experience. Frantic calls to T-Mobile are resulting in comped bills for some subscribers, so if you're affected, you might want to give 'em a ring.

Thursday October 8, 2009

We sincerely apologize for the inconvenience, and appreciate your patience as Danger works hard, around-the-clock to resolve this issue. T-Mobile will continue to keep you updated as we learn more. Thank you!

Regrettably, based on Microsoft/Danger's latest recovery assessment of their systems, we must now inform you that personal information stored on your device - such as contacts, calendar entries, to-do lists or photos - that is no longer on your Sidekick almost certainly has been lost as a result of a server failure at Microsoft/Danger.

This latest large-scale publicized data loss will surely lead to managers everywhere forwarding a link to the story to their IT departments asking "what are we doing so that this doesn't happen to us?" ...If you read almost any technology website or newspaper, you could be forgiven for thinking that "The Cloud" solves everything... In the Danger case, it appears from initial speculation that the data was lost because they attempted to upgrade a storage array without backing it up first. Here is a case of smart and rational people who do this for a living at one of the best companies in the world, and they didn't even bother making a backup.

Wednesday, October 7, 2009

If you have scheduled events in your database, and those events contain logic that depends on the DatabaseStart event finishing beforehand, it's up to you to make sure your scheduled events don't start running too early.

For example, let's say you have critical initialization logic in your DatabaseStart event, and your scheduled events contain other logic that assumes the initialization process is complete before they run. Since every event runs on its own connection, and connections run asynchronously, there's no guarantee that the DatabaseStart event will finish before your scheduled events start firing.

One strategy is to add more logic to each scheduled event to check if the DatabaseStart event has finished running, and to immediately RETURN if DatabaseStart has not finished.

Here's one way to implement that strategy:

Add code to the end of the DatabaseStart event to record the CURRENT TIMESTAMP in a single-row table.

Add code to the beginning of each scheduled event to check if the SQL Anywhere server startup timestamp is later than the timestamp recorded by the DatabaseStart event.

If the server startup timestamp is later than the DatabaseStart timestamp, then RETURN from the scheduled event.

If the server startup timestamp is later than the DatabaseStart timestamp, then the DatabaseStart timestamp must have been recorded by a previous execution of the DatabaseStart event during a previous server startup... which in turn implies the DatabaseStart event for the current server startup hasn't finished executing.

Eventually, the DatabaseStart event will finish, and the next time the scheduled event fires it won't execute the RETURN.
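The steps above can be sketched in SQL like this; the table and event names are illustrative, and I'm assuming PROPERTY ( 'StartTime' ) returns the timestamp of the current server startup:

```sql
-- Single-row table holding the timestamp of the last *completed*
-- DatabaseStart event.
CREATE TABLE database_start (
   one_row_only INTEGER NOT NULL DEFAULT 1 CHECK ( one_row_only = 1 ) PRIMARY KEY,
   completed_at TIMESTAMP NOT NULL );

CREATE EVENT database_start_event TYPE DatabaseStart
HANDLER BEGIN
   -- ... critical initialization logic goes here ...
   INSERT INTO database_start ( one_row_only, completed_at )
      VALUES ( 1, CURRENT TIMESTAMP )
      ON EXISTING UPDATE;  -- overwrite the single row each startup
END;

CREATE EVENT scheduled_work SCHEDULE START TIME '00:00' EVERY 1 MINUTES
HANDLER BEGIN
   IF ( SELECT COUNT(*) FROM database_start ) = 0
   OR ( SELECT completed_at FROM database_start )
         < CAST ( PROPERTY ( 'StartTime' ) AS TIMESTAMP ) THEN
      RETURN;  -- DatabaseStart hasn't finished for this startup yet
   END IF;
   -- ... real work that depends on initialization ...
END;
```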

Here are some excerpts from the Foxhound database monitor; first, a single-row table contains the DatabaseStart timestamp:

Wednesday, September 23, 2009

It's been a couple of years since I discovered the first way to slow down BULK INSERT: leave the PRIMARY KEY and other indexes in place while inserting data. The solution to that problem is to drop all the indexes, run the BULK INSERT and then recreate the indexes... it can be orders of magnitude faster that way.
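On the SQL Server side, the drop-and-recreate approach looks like this; the table, constraint, and file names are hypothetical:

```sql
-- Drop the indexes, bulk load, then recreate the indexes afterward.
ALTER TABLE dbo.big_load DROP CONSTRAINT PK_big_load;  -- and any secondary indexes

BULK INSERT dbo.big_load
   FROM 'C:\data\big_load.txt'
   WITH ( FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n', TABLOCK );

ALTER TABLE dbo.big_load
   ADD CONSTRAINT PK_big_load PRIMARY KEY ( pkey );
```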

Why am I talking about BULK INSERT? Because SQL Server is often used together with SQL Anywhere, and it's often necessary to ship data back and forth between the two databases, and BULK INSERT is a very efficient technique... sometimes. The online docs do talk about Optimizing Bulk Import Performance, but the subject of indexes is mentioned only in passing: "How to best increase the performance of a particular bulk import operation is influenced by the following factors: ... Whether the table has indexes."

What's the new way to make BULK INSERT slow? Leave the Windows System Restore feature enabled on the disk drive holding the SQL Server database. That increases disk I/O enormously, and the I/O is split between two different files in two different locations on the same drive... thus making the head move back and forth, slowing things down much more than if only one file was affected.

I'd never noticed this kind of disk activity before, perhaps because I had never seen a display like the following until installing Windows Vista with its wonderful Resource Monitor:

In the picture above you can see the heaviest write I/O involves a file in the C:\System Volume Information folder which is where Windows automatic restore points are recorded. The SQL Server database and log files (main.mdf and main.ldf) come second and third in terms of write I/O, and this ranking persists for the entire time the BULK INSERT is running.

And then there's all that database file read I/O... I'm guessing all of it is needed for the System Restore feature, as you'll see later when it all disappears.

The solution is to follow the instructions here and turn Windows System Restore off for the disk drive affected, as follows: Control Panel - System - System Protection - Automatic restore points - clear the checkbox next to the drive letter:

Turning off restore points made a dramatic difference in the BULK INSERT disk I/O, which in turn resulted in the elapsed time dropping from 1,612 seconds to 512 seconds for a 1.9 million row data set; that's a factor of 3 improvement. It did this by completely eliminating all I/O to the System Volume Information folder, which in turn allowed the database file I/O to run much faster. It also completely eliminated the database file read I/O as shown in this picture:

Does the Windows System Restore feature affect SQL Anywhere databases? I'm not sure, but I think the answer is "no". I tried loading the same data into a SQL Anywhere 11 database via LOAD TABLE and there was no activity in the System Volume Information folder at all.
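For comparison, the SQL Anywhere side of that test was a plain LOAD TABLE; here's a sketch with hypothetical file and table names (in LOAD TABLE the server reads the file directly, and '\x09' is the tab delimiter):

```sql
-- SQL Anywhere side: LOAD TABLE reads the file on the server itself,
-- bypassing the client; forward slashes avoid escape-character issues.
LOAD TABLE big_load
   FROM 'c:/data/big_load.txt'
   DELIMITED BY '\x09';
```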

Oh, and I would love to tell you how much faster SQL Anywhere's LOAD TABLE was than BULK INSERT, but I can't... that would be a "benchmark test", a violation of the license agreements, and it might even exhibit some kind of "measurement bias"... who, me, biased? <g>