I have a column with a primary key (clustered index) on it in a table,
but there is still a performance problem with a query. When I create a
non-clustered index on that same column, the performance of the query
suddenly improves. So, can we create a non-clustered index on a column
that already has a primary key on it?

It depends on whether your primary key is a natural or a surrogate key.
If it's a natural key, performance can be drastically affected when you
create a clustered index on that column, because a clustered index arranges
the physical sequence of the rows in the table. So, every time you add or
change data in the columns that make up the key, the server has to
physically rearrange the rows.
So, I have a policy of never using natural keys as primary keys and never
creating clustered indexes on natural data.
Apart from philosophical and emotional reasons, there is no practical reason
for using natural keys in a database at all. Creating indexes on natural
data is unavoidable, but they should never be clustered indexes.
Just my 2c's worth.
Nico
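As an illustration of that policy, here is a minimal sketch (the table and
column names are hypothetical, not from the original question): the
surrogate key carries the clustered primary key, while the natural data
gets an ordinary non-clustered unique index:

CREATE TABLE Customers
(
empid? -- no:
CustomerID int IDENTITY(1,1) NOT NULL
    CONSTRAINT PK_Customers PRIMARY KEY CLUSTERED  -- surrogate, never changes
, TaxNumber varchar(20) NOT NULL                   -- natural data
, Name varchar(50) NOT NULL
)
-- index the natural data for lookups, but never cluster on it
CREATE UNIQUE NONCLUSTERED INDEX ix_Customers_TaxNumber
ON Customers (TaxNumber)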

Of course there is reason to use natural data as primary key, or, more
accurately stated, put a primary key constraint on a true primary key. You
can gain more utility with less indexing. And, of course, we all know
indices are free.

But, again, a missing answer to the question, "why would one EVER cluster on
a unique value?"

Saying that a 'true primary key' necessarily consists of natural data is to
allow the frailties and imperfections of user data to infiltrate the
key-structures of the database. That's poor (if not dangerous) architecture.
Pure key-structures of a good, stable, and efficient database always
perform better when they are created from data that is neutral and
absolutely devoid of user input. Keeping key-structures pure means having no
user data in them, at all, ever.
From the data point of view, serial numbers are surrogate and user data is
natural. From the key-structure point of view, serial numbers are natural
and user data is surrogate. So, it's a misnomer to refer to serial numbers
as surrogate keys. More accurately, they are the native SS keys while user
data is the surrogate. But, since the misnomer has become the norm, I'll
continue to refer to the 'true database keys' as surrogate keys, and user
data keys as natural keys.
The argument that natural keys give more utility with less indexing is just
plain bad economics. User data in keys is very expensive, even more so when
it is incorporated into clustered indexes. It's much cheaper to have an
extra index consisting of a surrogate key than either a primary or foreign
key made up of user data.
The mere thought that the physical rows of a database have to be rearranged
when a clustered index's column data changes or is added to, mid-transaction,
is enough to make one cringe. The benefit of a clustered index, that it's
best utilized on ranges, is lost when a change to the underlying data of the
index occurs. Clustering on a unique, consecutive value (not just any unique
value - that question is misleading) has the immense value that the physical
order of the rows is cast in stone and no user-data change can cause the
server to go into a rearranging exercise, ever. Additions go to the end of
the table and nobody ever changes the series, period.
Having said all this, there is a very small window of exceptions where
natural data can, by its nature, overlap the characteristics of surrogate
keys, such as unique, consecutive order numbers; but in that case, the risk
of creating exceptions that could muddy up a clear, simple architecture is
too great, and the surrogate key should stand next to the order number.
The question is rather, "why would anyone ever cluster on user data?"
If there are large ranges of data that are accessed frequently, partition
them into a separate table constrained through surrogate keys and then
cluster-index on the values. In that way you have kept the database's
key-structures pure and free from user-data and utilized cluster indexes for
what they were designed.
Nico

"Clustering on a unique, consecutive value (not just any unique
value - that question is misleading) has the immense value that
the physical order of the rows is cast in stone and no user-data
change can cause the server to go into a rearranging exercise, ever"

It can also kill any of the underlying fillfactor and padding
options you have set. If you're dealing with an OLTP system, it
can also create a hotspot as users compete to insert a new
record.
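For reference, the fill factor and padding options mentioned here are set
when the index is created; a hypothetical example (SQL Server 2005+
syntax, index and table names assumed):

CREATE CLUSTERED INDEX ixcOrders ON Orders (OrderKey)
WITH (FILLFACTOR = 60, PAD_INDEX = ON)
-- a later rebuild re-applies the stored fill factor
ALTER INDEX ixcOrders ON Orders REBUILD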

Order numbers are generated by the order entry (POS) system, a non-human
user, and are NOT surrogate nor undependable. "Natural" data does NOT imply
that the data was keypunched, an assumption of a poor architect.

Serial numbers are NOT surrogate if we mean a number assigned outside of the
database like automobile VIN numbers. They belong to the entity and
determine the data schema -- they are TRUE candidate keys.

Now if serial number is being used as a misnomer for identity field values
then it is CERTAINLY a surrogate key. It exists AFTER the entity only in
the database. That is the definition. The issue is not user vs dba, that's
politics, the issue is entity vs application, that's computer science.

That my order table has order number as its primary key, and my order detail
table has order number & line number as its primary key, allows me to find
out how many orders there were for a particular part WITHOUT joining to the
order table, and eliminates the need for the EXTRA index on the surrogate
value as a foreign key, not to mention the overhead of creating TWO sets of
ersatz entity values. This is GREATER facility with LESS expense.
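That claim can be sketched as follows (table and column names are
hypothetical): with the order number in the order-detail key, one index on
the detail table answers the question on its own:

CREATE TABLE OrderDetail
(
OrderNumber int NOT NULL
, LineNumber int NOT NULL
, PartNumber int NOT NULL
, CONSTRAINT PK_OrderDetail PRIMARY KEY (OrderNumber, LineNumber)
)
CREATE INDEX ix_OrderDetail_Part ON OrderDetail (PartNumber)

-- how many orders contained a given part, with no join to the order table
SELECT COUNT(DISTINCT OrderNumber)
FROM OrderDetail
WHERE PartNumber = 1234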

To create a clustered index just because it's there and then argue that it
is best to have it immutable is folly. If you're not supporting ranged
searches and don't want physical re-indexing then DON'T declare one.
Clustered DOES NOT equal primary key constraint. That all decisions are dba
based rather than business based is EXACTLY where architecture fails.

Clustering on a unique, consecutive value casts the physical order of the
rows in stone. In database design architecture, that's a huge plus if the
primary relationship key (not the primary key) and its foreign keys are
designed properly.
Having it on a random unique value is pointless.

If the table is of a "root" nature in hierarchical terms, the choice of
column for the clustered index is not obvious EXCEPT that it is definitely
NOT a primary key or identity column. It should be employed to support the
most needy order by or group by situations. For "non-roots" it should be on
the neediest foreign key column.

There is a definitive line between the database integrity and the user-data.
If the user-data is allowed to filter into the database's integrity
structure, there is a greater opportunity for the integrity to become
compromised. Why would any database architect worth his salt allow that?

<The issue is not user vs dba, that's politics, the issue is entity vs
application, that's computer science>
User=application; dba=entity. So, you just claimed politics are science?
Entity vs application makes my point exactly. Why would you have the fickle,
tainted, variable stuff from the application pollute that which should be
absolutely untainted and pure?

In principle, one mixes up business rules with database rules when user-data
is introduced to maintain the integrity of the database. Admittedly, most
business rules are managed within the database as constraints but they
should never be employed to maintain the database integrity.

<allows me to find out how many orders there were for a particular part
WITHOUT joining to the order table>
How would you do that without also querying on the part number? That shoots
your whole basis, that this is economical, out of the water.

<To create a clustered index just because it's there and then argue that it
is best to have it immutable is folly.>
I agree. Perhaps you should read again what I said. You made up the part
about 'just because it's there'. Having a unique, consecutive clustered
index casts the physical order of the rows in stone, which gives the best
performance when it's used in a PK : FK relationship. Having a clustered
index on user data imposes a huge penalty on the server when the clustered
index's underlying data changes. Ergo, use clustered indexes very carefully
and only after fully understanding the consequences.

< Serial numbers are NOT surrogate if we mean a number assigned outside of
the database like automobile VIN numbers. They belong to the entity and
determine the data schema -- they are TRUE candidate keys.>
You still are dealing with tainted data that you use to maintain entity
integrity. How the VIN numbers get into the database is the point of
pollution. Only numbers generated within the database are true candidate
keys. In addition, when user data is used as keys, you have to contend with
varchars and all sorts of datatypes. A varchar has extra overhead to process
compared to an integer, which is fixed-length. Having a varchar in a
PK : FK relationship is not the best architecture one can employ.

mdonnelly and nico,
You guys are both Young Guns, albeit of different breeds.

A little history... years ago, SQL Server did not have row-level locking.
I know this because I was working on Sybase about 11 years ago. Microsoft's
SQL Server is based on Sybase.

So instead of row-level locking, you had disk-page-level locking. Now,
what happens in an OLTP environment when 100 users are trying to insert
rows into the same table? If you do nothing else, you get massive
contention, or hot spots. Except that instead of slowing things down, you
could easily get into deadlock situations. Often there are multiple tables
that need data inserted before a transaction can complete. I know this; I
was the data architect for a call center app that had exactly this issue.

The solution: use a clustered index on a random, unique key, with a medium
fill factor (say 60%). What this does: when many users are simultaneously
trying to insert rows into the same table, the chances of two inserts
happening in the same disk page is low. Remember, hardware was slower,
and the DB server did not have row-level locks, only page-level. This
approach saved our collective asses.

One note: the "random" unique key wasn't really random. It was based on
a sequence, but the order of the digits was shuffled around in a
predetermined way.
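One way to sketch that idea (purely illustrative; this is not the original
code): derive the key from a sequence, but reverse its digits so that
consecutive inserts land far apart in the index:

DECLARE @seq int = 1002
SELECT CAST(REVERSE(RIGHT('0000000000'
    + CAST(@seq AS varchar(10)), 10)) AS bigint)
-- 1001, 1002, 1003 become 1001000000, 2001000000, 3001000000:
-- still unique, but neighbouring sequence values scatter across pages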

So this is one answer to the comments:
nico > Having it on a random unique value is pointless.

mdonnelly > Again, why would one EVER cluster on a unique value OR ever
in an OLTP?

Now that hardware is faster, and databases more advanced, people are
missing out on some of the subtle aspects of performance tuning. Because
even with row-level locks and 8-way 3.0GHz/32 GB servers over 36 disk,
1GB/sec SANs, you will *still* hit a limit. That limit may be 2000
users. If you want to be able to support 4000 users, you can buy more
brawn, or use your brains.

Yes, you can have a non-clustered primary key, and you can also create a composite clustered index instead of making the primary key the clustered one. You can use the following MS SQL statements to create a non-clustered primary key and a clustered composite index:
CREATE TABLE EMPLOYEES
(
empid int NOT NULL CONSTRAINT ix_pkEMPLOYEES PRIMARY KEY NONCLUSTERED
, name varchar(25) NOT NULL
, age tinyint NOT NULL
)
CREATE CLUSTERED INDEX ixcEMPLOYEES ON EMPLOYEES (empid, name)
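You can verify the result with the catalog views (this is standard SQL
Server metadata, not part of the original answer):

SELECT name, type_desc, is_primary_key
FROM sys.indexes
WHERE object_id = OBJECT_ID('EMPLOYEES')
-- ix_pkEMPLOYEES shows as NONCLUSTERED with is_primary_key = 1,
-- ixcEMPLOYEES as CLUSTERED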

Affirmative. Adding to previous post.
When you create a typical table as
CREATE TABLE Inventory (
InventoryID int identity(1,1) PRIMARY KEY,
Item....
............)
SQL Server creates it with a CLUSTERED index on InventoryID.
The only requirement for a PRIMARY KEY, though, is that it be a unique
constraint. It can be backed by a non-clustered unique index.
It is a bit tricky to move the clustered index to a different column.
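A scripted sketch of that move (constraint, index, and column names are
assumed here, since the inline PRIMARY KEY above gets a system-generated
name):

-- 1. drop the existing clustered primary key
ALTER TABLE Inventory DROP CONSTRAINT PK_Inventory
-- 2. put the clustered index on the new column
CREATE CLUSTERED INDEX ixc_Inventory_Date ON Inventory (TransactionDate)
-- 3. re-create the primary key as a non-clustered index
ALTER TABLE Inventory ADD CONSTRAINT PK_Inventory
PRIMARY KEY NONCLUSTERED (InventoryID)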

And what's difficult about moving a clustered index? The GUI does it all for you, you just need to be aware that all non-clustered indexes also get dropped and rebuilt at the same time (can take some time if you have a bunch).

>And what's difficult about moving a clustered index?
Probably, nothing for you and me.
I was posting to wider audience. Newbies are reading the threads also.
I prefer to use scripts for large tables.
Management Studio Object Explorer plays it the "safe" way, so changes on
large table may take much longer than using scripts. Also, scripts you can
save.
Kalman Toth, SQL Server & Business Intelligence Architect
SQL Server 2008 Training - SQLUSA: http://www.sqlusa.com/order2008grandslam/

Unfortunately, it's not the same benefit. Clustered indexes are indeed
the data. If you merely create an additional index on the FK, you will
be introducing multiple page reads - one for the index page and one for
the data page. Because the clustered index _IS_ the data, you've got
everything you need from the row.

I would think that _targets_ of one-to-many relationships could benefit
from clustering the primary key on the _target_. PK's aren't always
accessed on a unique key only basis.

I don't subscribe to the concept of:
(You shouldn't even hesitate to make the primary key index of the
non-clustered variety.)

Clustering should be carefully evaluated on a case by case basis. It
can dramatically improve performance or devastate it.

Ok, I re-read your post on _clustering_ the FK, which would of course
include the data, but I would submit that you don't need the _extra_
primary key index if you clustered on the primary key in these cases.

The benefit of a clustered index is that it physically organizes the rows in
the order of the index, which, incidentally, becomes the base reference for
the other indexes. The most bang for your buck comes from clustering a
foreign key that has duplicates: when duplicate keys are encountered, it is
not necessary to visit the index to find the next row; it can be read
directly from the table, since the next row is physically in the adjacent
position, saving time. Clustering a unique index has very limited value,
since it is never certain that the keys will be read in sequence, meaning a
visit to the index is required anyway.
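A sketch of that approach (hypothetical names): the unique primary key
stays non-clustered, and the clustered index goes on the duplicated foreign
key:

CREATE TABLE InvoiceLine
(
InvoiceLineID int IDENTITY(1,1) NOT NULL
    CONSTRAINT PK_InvoiceLine PRIMARY KEY NONCLUSTERED
, InvoiceID int NOT NULL   -- foreign key, many duplicates per invoice
, Amount money NOT NULL
)
CREATE CLUSTERED INDEX ixc_InvoiceLine_Invoice ON InvoiceLine (InvoiceID)
-- a query like WHERE InvoiceID = 42 now reads physically adjacent rows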

Just another tidbit. If your system calls for recreating the indexes, make
sure to rebuild the clustered indexes first, because SQL Server will have to
redo the nonclustered indexes if you recreate them before you recreate the
clustered index.
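If you script the rebuilds yourself, that ordering advice reads roughly
like this (index names hypothetical; on SQL Server 2005+, ALTER INDEX ...
REBUILD is the supported syntax):

ALTER INDEX ixc_Orders_Date ON Orders REBUILD      -- clustered index first
ALTER INDEX ix_Orders_Customer ON Orders REBUILD   -- then the non-clustered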
_____

The gist of the article is that, if you cluster a nonunique index, then
SQL Server will add a 4 byte, hidden value to make the rows unique. That,
at least to my eyes, reads like it will add both time and space overhead.

But indexing IS time and space overhead! The idea is that it is LESS
time and NOT TOO MUCH space to be worth the savings. If the idea behind
clustering is to bring together the things that will likely be needed
together into the same or nearby I/O then that's the cost. If the idea
is just to have a clustered index because it's a Microsoft default, well
then... well then the least expensive index is to have no rows in the
table!

That is true mdonnelly.
However, we are used to INT identity(1,1) PRIMARY KEY, that means a move
for the CLUSTERED index to a column like TransactionDate.

Actually, no, you _can't_ do that. I can't even conceive of how one would
create _any_ sort of index on an index. ;-)

However, you CAN create your PK as a nonclustered index. Just don't do it
by clicking that little Key icon. You will need to go to the "Manage
Indexes and Keys" selection in Management Studio and add the PK from there
(being careful to make sure that you do NOT create it as a Clustered
Index).

Copyright 1998-2015 Ziff Davis, LLC (Toolbox.com). All rights reserved. All product names are trademarks of their respective companies. Toolbox.com is not
affiliated with or endorsed by any company listed at this site.