Tips and tricks

July 20, 2009

Great news for all Topobase users and third-party developers: Autodesk has just published the full Autodesk Topobase 2010 help at www.autodesk.com/topobase-help. It is now available online and accessible from anywhere.

Feature Rules Reference - Provides a brief overview of the concept of feature rules. It includes sections that describe the basic server-side and client-side rules, the rules you can use with feature classes in the Coordinate Geometry (COGO), Template, and Plot extensions, and the rules you can associate with the specialized feature classes of the Topobase vertical application modules.

April 30, 2009

The Topobase Administrator provides an easy way to manually create and maintain the data model for each document. However, there is a need to be able to accurately and repeatably replicate a data model, for example when moving from a development to a testing or production environment. Topobase has a very rich API for this purpose and a mechanism called structure update. This is used by vertical modules to ensure that the expected data model is present when they are added to the document. But somebody has to write the code to create the structure update.

With this use-case in mind, the open source project ModelIO comes to the rescue. It provides a way to extract a Topobase data model to an XMI file, and conversely, generate a data model based on an XMI data file - so no one has to write the code. The project also includes an FDO utility to perform the same actions on an FDO data store.

April 14, 2009

Here is a very nice example of what is possible with Microsoft’s Silverlight 2.0 platform. The web page below shows dynamic zooming inside a mapping control. It looks and feels similar to Google Earth, but it works within the web browser. Currently supported platforms for Silverlight are Firefox, Safari, and Internet Explorer on Windows, Mac OS, and Linux.

For web developers: the Silverlight control being used is called Deep Zoom and displays multi-scale images. Another use case where you can see this control in action is here: http://memorabilia.hardrock.com/

February 12, 2009

Topobase offers several ways to filter features: simple attribute filters, SQL-based filters, and geographic filters. All of these filters can be managed easily and saved for later use.

The following video shows a couple of interesting things that can help you search for features. First you see how to digitize a polygon in the map in order to retrieve all the point features inside this polygon. Then you can add further conditions; in this example I add a condition to select only the landmark points inside this polygon (select Additive Filter!). Finally I save this filter in the Filter Manager for later use. If you want other users to be able to use this filter, you can make it public.

October 20, 2008

Although it is easy to access the Topobase Document object from within a PlugIn such as a Workflow, if you need to access the Topobase API when your code is invoked from the AutoCAD command line or a menu item, you will need a small trick.

The idea is to implement a DocumentPlugIn that provides static accessors to the Topobase context, such as the Application object and its collection of Document objects – the Singleton pattern. It might take this form:

using System;
using Topobase.Forms;

namespace SingletonPlugIn
{
    /// <summary>
    /// Provides static access to the Topobase context.
    /// </summary>
    public class StaticConnectionDocumentPlugin : DocumentPlugIn
    {
        protected static Application _Application = null;

        public static Application StaticApplication
        {
            get { return (_Application); }
        }

        public static Document StaticDocument
        {
            get { return (_Application.Documents.Active); }
        }

        public override void OnInitComplete(object sender, EventArgs e)
        {
            _Application = Document.Application;
        }
    }
}

The only non-static method in the class is OnInitComplete, which squirrels away the Application object – from which all other Topobase context items can be obtained – when the document is loaded.

You will need a TBP file, or an entry in an existing TBP file, to load the plug-in.

September 09, 2008

Topobase Forms and its functionality are among the most heavily used components of Topobase Client and Topobase Web. By improving form performance you can speed up your standard tasks very easily. Just read the following do's and don'ts and think about where you can gain some performance.

What you should do:

· If you have repeating queries, or repeating parts of queries, encapsulate them in a PL/SQL stored procedure so they can be reused. This has a huge impact on performance.

· Disable controls in grid mode (table view) if you don’t need them there.

· If you don’t need a control (SQL Label/SQL Textbox) at all, remove it from the form rather than just hiding it.

· Consider whether a value really needs to be displayed right away. You can instead use a “Script Programmable Control” (a button the user can click that executes a VB.NET script with access to the full Topobase API) to calculate the value on demand and display it in a message box.

Sample code:

Public Overrides Sub Button_Click()
    Dim myResult As String
    ' Fetch the value on demand instead of on every form load
    myResult = Me.ConnectionTools.StrValue("select name from pipe where fid_parent=" & Me.LngValue("FID"))
    Me.Application.MessageBox("The pipe name is: " & myResult)
End Sub
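The first tip above, encapsulating repeated queries in a PL/SQL stored procedure, could look like the following minimal sketch. The function name get_pipe_name is hypothetical; the pipe table and fid_parent column are taken from the sample above.

```sql
-- Hypothetical helper that encapsulates the repeated lookup, so the
-- statement is parsed once and can be reused by several form controls.
CREATE OR REPLACE FUNCTION get_pipe_name (p_fid_parent IN NUMBER)
  RETURN VARCHAR2
IS
  v_name VARCHAR2(255);
BEGIN
  SELECT name
    INTO v_name
    FROM pipe
   WHERE fid_parent = p_fid_parent;
  RETURN v_name;
END get_pipe_name;
/
```

A control or script can then call it with "select get_pipe_name({FID}) from dual" instead of repeating the full query everywhere.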

What you should not do:

· Don’t use SQL Labels if the data is already loaded into the form; use the “Formula Textbox” instead. If your form is WW_LINE and you have a select like this:

“Select length * 1000 from ww_line where fid = {fid}”

then the column “length” is already available in memory and accessible via the API, which the “Formula Textbox” can call. So add a “Formula Textbox” with this code:

If Me.IsNull("length") Then
    result = ""   ' the length has a NULL value
Else
    result = Me.DblValue("length") * 1000
End If

· Don’t use SQL Labels if you only want to lock a control. This can be done via the control’s properties.

· Don’t use SQL Labels if you want to use an attribute of the feature in more than one tab of the form. Use the “Formula Textbox” instead.

·Check that your SQL is fast by using Explain Plan and also check that you have indexes where necessary.

Sample for the Topobase SQL Sheet:

“ExplainPlan select * from tb_info where id=100”

Watch out for a “full table scan” (meaning the indexes are not used at all) if you query less than 15% of the data.
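If the plan does show a full table scan for a selective query, adding an index usually fixes it. A sketch for the tb_info example above (the index name is arbitrary, and tb_info.id may of course already be indexed in your schema):

```sql
-- Index the filtered column so the optimizer can avoid a full table
-- scan for selective queries such as "where id = 100"
CREATE INDEX tb_info_id_idx ON tb_info (id);
```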

· If you have several SQL Labels that query values from the same table in the same way, use an “Editable SQL Text Box” instead, as these are optimized for that case.

Sample:

“Select pipe_type from TT_PIPE where FID_PARENT = {FID}”

“Select pipe_color from TT_PIPE where FID_PARENT = {FID}”

“Select pipe_name from TT_PIPE where FID_PARENT = {FID}”

So only the result columns differ; the query part “TT_PIPE where FID_PARENT = {FID}” is the same. If you use “Readonly SQL Textboxes” (SQL Labels), the system runs three selects. If you use an “Editable SQL Text Box”, the system combines the three selects internally into one:

“Select pipe_type, pipe_color, pipe_name from TT_PIPE where FID_PARENT = {FID}”

But this only works if the WHERE part “where FID_PARENT = {FID}” is exactly the same: the match is case sensitive and sensitive to whitespace (the same number of blanks, etc.). Set the property “Is Locked” to ‘TRUE’ so the control acts like a SQL Label.

· Invalid select statements slow down your performance!

If you see “#ERROR#” in a SQL Label, the select is invalid. This can happen for only some of the records, for example if the column used in the select statement is NULL there. To investigate, go to a record where you see the error, open the Form Designer (CTRL+F), and check the select by executing it in the SQL Assistant.

September 26, 2007

When a database is up and running, every connected process is either busy performing work or waiting to do so. A process that is waiting may mean nothing in the overall scheme of things - the user may have left to get a cup of coffee, for example - or it can be an indicator that a database bottleneck exists. This is where wait-based or bottleneck analysis comes into play.

Waits are conditions where a session is waiting for something to happen. They can be caused by a number of things, from slow disks to locking situations to various kinds of internal Oracle contention. The response time of a database operation consists of two factors: the service time (execution) plus the wait time:

Response Time = Service Time + Wait Time

How do you correctly practice bottleneck or wait-based analysis? First, it is imperative that you obtain not only the number of waits for a particular event but also the times involved for each wait. Having both lets you form a complete picture regarding the magnitude of wait-initiated performance degradation. (Of course, the CPU may be the actual bottleneck, but that's a subject for a different post.)

The view V$SYSTEM_WAIT_CLASS simply shows the wait times and counts across the new wait classes. You can use the following query to analyze the waits in your Oracle server:

select WAIT_CLASS,
       TOTAL_WAITS,
       round(100 * (TOTAL_WAITS / SUM_WAITS), 2) "WAITS %",
       TIME_WAITED,
       round(100 * (TIME_WAITED / SUM_TIME), 2) "TIME %"
from
  (select WAIT_CLASS,
          TOTAL_WAITS,
          TIME_WAITED
   from V$SYSTEM_WAIT_CLASS
   where WAIT_CLASS != 'Idle'),
  (select sum(TOTAL_WAITS) SUM_WAITS,
          sum(TIME_WAITED) SUM_TIME
   from V$SYSTEM_WAIT_CLASS
   where WAIT_CLASS != 'Idle')
order by 5 desc;

A sample output could look like this:

WAIT_CLASS      TOTAL_WAITS WAITS % TIME_WAITED TIME %
--------------- ----------- ------- ----------- ------
System I/O            80567     5.6       56291  31.24
Commit                58050    4.04       50071  27.79
Concurrency           50017    3.48       36772  20.41
User I/O              26845    1.87       17695   9.82
Configuration          7004     .49       10742   5.96
Other                 10628     .74        4217   2.34
Network             1204195   83.71        2956   1.64
Application            1269     .09        1418    .79

In this example it's easy to see that the bulk of the overall wait time is due to system or user I/O waits. This method is much quicker and more accurate than tallying individual wait events to get a global picture. The following list explains the different wait classes in more detail:

· Application – waits resulting from user application code, for example lock waits caused by row-level locking

· Commit – waits for the redo log write confirmation after a commit (log file sync)

· Concurrency – waits for internal database resources, for example latches

· Configuration – waits caused by inadequate configuration, for example an undersized log buffer or buffer cache

· Idle – the session is inactive, waiting for work (filtered out in the query above)

· Network – waits related to network messaging

· Other – waits that do not fall into any other class

· System I/O – waits for background process I/O, for example by DBWR or LGWR

· User I/O – waits for user I/O, for example db file sequential read

July 30, 2007

In the past couple of blog posts I showed you ways to tune your Oracle Database. Now I would like to share some other insights into Oracle. In this post I explain how to determine which tasks Oracle has spent most of its time on (since the last startup).

It’s important (and in my opinion also interesting) to know whether your Oracle database spends most of its time parsing and executing your SQL statements OR executing PL/SQL and Java code (stored procedures and triggers, such as Topobase feature rules).

With the following SQL statement we can find the answer to that question:

select STAT_NAME,
       ROUND((VALUE / 1000000), 2) "Time (sec)"
from V$SYS_TIME_MODEL
order by 2 DESC;

I have executed this SQL statement against our Oracle Performance Test server, so the results represent up to 200 concurrent Topobase Client users:

STAT_NAME                                          Time (sec)
-------------------------------------------------- ----------
DB time                                               8577.61
DB CPU                                                5134.29
parse time elapsed                                    5133.76
hard parse elapsed time                               4842.81
sql execute elapsed time                              4362.92
PL/SQL compilation elapsed time                       1477.02
hard parse (sharing criteria) elapsed time             739.79
background elapsed time                                671.44
PL/SQL execution elapsed time                          471.47
sequence load elapsed time                             116.87
Java execution elapsed time                            103.66

...

Interesting values for a DBA or a Software Developer on Topobase are the following:

DB time: Amount of elapsed time (in microseconds) spent performing database user-level calls. This does not include the time spent on instance background processes such as PMON. Compare this value against all of the other values contained in this table to determine where the bulk of the time is being spent.

parse time elapsed: Amount of elapsed time spent parsing SQL statements. It includes both soft and hard parse time.

sql execute elapsed time: Amount of elapsed time SQL statements are executing. Note that for select statements this also includes the amount of time spent performing fetches of query results. If it is extremely high, look for poorly performing SQL statements.
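To find those poorly performing statements, a query along these lines against V$SQL can help. This is a sketch: ELAPSED_TIME in V$SQL is in microseconds, and the limit of 10 rows is an arbitrary choice.

```sql
-- Top 10 SQL statements by total elapsed time since instance startup
select *
from (select SQL_ID,
             EXECUTIONS,
             round(ELAPSED_TIME / 1000000, 2) "Elapsed (sec)",
             SQL_TEXT
      from V$SQL
      order by ELAPSED_TIME desc)
where rownum <= 10;
```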

July 24, 2007

When working with huge datasets, disk load balancing becomes very important. Such databases are often too big to be fully cached in memory, but users still expect instant response times. This leads to a major problem: as we all know, the most time-consuming part of a transaction is the disk I/O. So the question is: how can we reduce the number of disk reads and writes?

The answer to that question is easy: if possible, allocate more memory to Oracle (increase the SGA size); see also the previous post about the Oracle Database Buffer Cache Hit Ratio. Otherwise, try to reduce the time needed to read from and write to your disks by distributing the database files among all available disks. In my opinion, the following rule of thumb optimizes performance:

one schema per tablespace

one tablespace per database file

one database file per disk

One database file per disk reduces simultaneous I/O requests and improves performance. You can also reach this goal by disk striping, which spreads the load across multiple disks automatically.

Use a SQL query against Oracle's file I/O statistics to gather the following information:

How fast are the read and write accesses to the disks? You might reduce these values by maintaining your disks, for example by defragmenting them.

Does the read and write percentage of the database files (tablespace) seem to be OK? For example, if the SYSTEM database file has a lot of I/O operations, you may not have set the correct default and temporary tablespace for your Topobase schemas.

Are your I/O operations equally distributed among your disks (if you have more than one disk)? If not, consider moving the most heavily used (or all) database files to separate hard disks.
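A query that gathers these statistics might look like the following sketch against Oracle's V$FILESTAT and V$DATAFILE views. Note the assumptions: read and write times are reported in hundredths of a second and are only populated when TIMED_STATISTICS is enabled.

```sql
-- Physical reads/writes per database file, each file's share of the
-- total I/O, and average read/write times (in hundredths of a second)
select df.NAME,
       fs.PHYRDS,
       fs.PHYWRTS,
       round(100 * (fs.PHYRDS + fs.PHYWRTS) /
             sum(fs.PHYRDS + fs.PHYWRTS) over (), 2) "I/O %",
       round(fs.READTIM  / greatest(fs.PHYRDS, 1), 2)  "AVG READ (cs)",
       round(fs.WRITETIM / greatest(fs.PHYWRTS, 1), 2) "AVG WRITE (cs)"
from V$FILESTAT fs
join V$DATAFILE df on fs.FILE# = df.FILE#
order by fs.PHYRDS + fs.PHYWRTS desc;
```

Files that dominate the "I/O %" column are the first candidates for moving to their own disk.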