Database Administrators Stack Exchange is a question and answer site for database professionals who wish to improve their database skills and learn from others in the community.

I want to use separate schemas for objects with different purposes in our SQL Server 2008 database. Right now we use a fairly mind-numbing naming convention to indicate the purpose of a table or stored procedure, and the prefixes mean we have to scan five or six characters in before we even see the start of the unique name. I would like to use separate schemas for tables that are just used to drive the UI (menus, roles by person, etc.) and for those that are dimension tables vs. fact tables, etc.

My question is, will there be performance impacts from using multiple schemas (schemae?) as opposed to the good old dbo for everything?
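For reference, introducing a schema and moving an existing table into it is a small one-time DDL step; a minimal sketch (the schema and table names here are made up for illustration):

```sql
-- Create a schema for UI-support objects (name is illustrative).
CREATE SCHEMA ui AUTHORIZATION dbo;
GO
-- Moving an existing table between schemas is a metadata-only change;
-- no data is copied or rewritten.
ALTER SCHEMA ui TRANSFER dbo.Menus;
```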

This question came from our site for professional and enthusiast programmers.


+1 Good question - I'm not aware of any difference at all - it's just a different name. Would be interesting to know if there are differences.
–
Jon Egerton Sep 28 '11 at 14:30


The plural for schema is schemata
–
Tom H. Sep 28 '11 at 14:31

This shouldn't have any impact as long as you use the schema name when referencing the objects.
–
Michal Barcik Sep 28 '11 at 14:51

This. There is zero impact on performance but a lot of impact on just plain old developer usability.
–
NotMe Sep 28 '11 at 14:54

LOL. That's a good point. However, I am using EF, so I don't actually have to write the schema names when I write the UI code. I guess it's up to the ETL folks how they want to do it.
–
Lisa Zoglio Morgan Nov 10 '11 at 23:06

2 Answers

There may be an impact on the performance of the query optimizer, albeit a small one, depending on your coding style. If you reference a table without a schema, the optimizer has to identify the table by first checking tables within the user's default schema (if there is one), then the dbo schema, and then everything else. If you explicitly reference tables as schema.table, which is probably good practice anyway, then even this minor overhead is avoided.
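To make the two styles concrete, a quick sketch (the table name is hypothetical):

```sql
-- Unqualified: name resolution probes the caller's default schema first,
-- then falls back to dbo.
SELECT * FROM FactSales;

-- Schema-qualified: resolved directly against dbo, no extra lookup.
SELECT * FROM dbo.FactSales;
```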

No difference in performance. However, you are using schemas right now (even if you don't know it).

References to schema objects such as tables, stored procedures, and UDFs that are not schema-qualified do have a performance impact, so references should always be qualified by schema. An unqualified reference has to be resolved, and that happens like this:

First, look for an object of the same name and type under the default schema of the user under whose credentials the session was established (e.g. jsmith). If found, that instance is used.

Otherwise, look for an object of the same name and type under the schema dbo.
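As a sketch of where that default schema comes from (the login and user names are hypothetical):

```sql
-- The default schema is assigned when the database user is created
-- (or changed later with ALTER USER).
CREATE USER jsmith FOR LOGIN jsmith WITH DEFAULT_SCHEMA = jsmith;

-- For jsmith, an unqualified reference such as
--   SELECT * FROM foo;
-- is checked against jsmith.foo first, and only then against dbo.foo.
```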

This has several effects:

Most of the time, two lookups are required to resolve the reference rather than the single lookup required if the reference is schema-qualified.

The execution plan obtained when the query/stored procedure/user-defined function is bound can't be cached and reused.

The final effect, one you'll only discover (painfully) when something breaks, is that different users may get different results from a given query or stored procedure. Something like select * from foo join bar may work fine for me as the db owner; it may be broken for user jsmith who, inadvertently or not, created a table named foo under his own schema (jsmith.foo) in the same database.
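That hazard can be reproduced with a couple of lines (purely illustrative names):

```sql
CREATE TABLE dbo.foo (id int);
CREATE TABLE jsmith.foo (id int, extra_col int);  -- same name, different shape

-- Run as dbo:     SELECT * FROM foo;  -- resolves to dbo.foo
-- Run as jsmith:  SELECT * FROM foo;  -- resolves to jsmith.foo,
--                                     -- returning a different column list
```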

For this reason, too, create and drop statements should schema-qualify the name of the object being created or dropped.
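For example (hypothetical object name):

```sql
-- Qualified DDL: the table is created in, and dropped from, dbo explicitly,
-- regardless of the caller's default schema.
CREATE TABLE dbo.bar (id int NOT NULL);
DROP TABLE dbo.bar;
```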