The trigger below works fine, but it is slow: it takes 4 seconds to return one record, which is far slower than I need. How can I speed it up?

Triggers are resource-expensive. Can I use constraints instead of this trigger, or can cascading solve it? If any other approach gives better speed, please suggest it.

I have two tables, VEHICLETRACK and Map_Places_India. I wrote a trigger on insert into VEHICLETRACK. When a record is inserted into VEHICLETRACK, I calculate the place for its inserted latitude and longitude, and then add that place to a TempAddPlace table; I use this place for other purposes. In this trigger I use the line

select @count=count(name) from Map_Places_India line

The @count variable comes back as 2500, which means the loop executes 2500 times to find the place for a single latitude/longitude pair.

Hi. I have two tables: 1: VEHICLETRACK(Jrid, Deviceid, Latitude, Longitude, Timerecorded, id, State, Speed)

2: Map_Places_India(id, Latitude, Longitude, place)

When a record is inserted into VEHICLETRACK, I look up its place in Map_Places_India by matching the inserted latitude and longitude. If there is no place in Map_Places_India for the inserted latitude and longitude, I find the nearest place to it instead.

You have just inserted a row in VEHICLETRACK; you are getting the @TriggLat and @TriggLon of that inserted row, and then using those values to find matching row(s) in VEHICLETRACK - which will include the row you just inserted ... and possibly multiple other rows. Only one row's values will end up in @la11 / @lon1, and that choice is entirely arbitrary as you have written it - quite possibly a different row for each of the two SELECT statements ...
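A safer way to pick up the new row's coordinates is to read them from the `inserted` pseudo-table inside the trigger, instead of re-querying VEHICLETRACK. A minimal sketch (the trigger name is hypothetical; column names follow the table definition above):

```sql
-- Hypothetical sketch: read coordinates from the "inserted"
-- pseudo-table rather than re-querying VEHICLETRACK.
-- Caution: a trigger fires ONCE for a multi-row INSERT, so
-- "inserted" can hold several rows; single-variable assignment
-- is then arbitrary again -- a set-based join is the robust fix.
CREATE TRIGGER trg_VehicleTrack_GetCoords ON VEHICLETRACK
AFTER INSERT
AS
BEGIN
    DECLARE @TriggLat float, @TriggLon float;

    SELECT @TriggLat = Latitude,
           @TriggLon = Longitude
    FROM inserted;          -- only the row(s) just inserted

    -- ... look up the place using @TriggLat / @TriggLon ...
END
```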

select @count=count(name) from Map_Places_India

This counts the number of rows in the Map_Places_India table. It does NOT tell you what the maximum value in the [id] column is, or whether the [id] column has contiguous values - there is a very high chance that the values are not contiguous if it is an IDENTITY column. However, you may have created [id] by other means and be sure that it is contiguous?

I would have used:

select @i=MIN([id]), @count=MAX([id]) from Map_Places_India

for what you then proceed to do (notwithstanding that a set-based approach would be better, as khtan has mentioned)
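A set-based version of the lookup removes the loop entirely - one statement handles every inserted row at once. A sketch, assuming an exact Lat/Long match is tried first (table and column names as given above; the trigger name and TempAddPlace column are assumptions):

```sql
-- Hypothetical set-based sketch: exact-match lookup for ALL
-- inserted rows in one statement, no loop over Map_Places_India.
CREATE TRIGGER trg_VehicleTrack_SetBased ON VEHICLETRACK
AFTER INSERT
AS
BEGIN
    INSERT INTO TempAddPlace (place)
    SELECT m.place
    FROM inserted i
    JOIN Map_Places_India m
      ON  m.Latitude  = i.Latitude
      AND m.Longitude = i.Longitude;

    -- Rows with no exact match still need the nearest-place
    -- fallback, which can also be written set-based.
END
```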

And if you must do it that way, why didn't you reuse the value of @dislat in the calculation of @dislon, to save SQL Server having to calculate "69.1 * 1.609344 * (@latdis2 - @latdis1)" a second time?

If you are trying to use the Great Circle algorithm to find the nearest point with decent efficiency, you need to select a handful of records with the smallest difference by Lat & Long from your target into a temporary table, and then select the best from that small data set using the Great Circle / Haversine algorithm, so that the COS, SQUARE, SQRT etc. functions are used sparingly and minimally to reduce CPU overhead.

It's probably sufficient to SELECT TOP 1 WITH TIES ordered by the combined difference of Lat & Long, and then SELECT TOP 1 from that using the Great Circle algorithm. With a covering index on Lat, Long and the PK this should be very efficient.
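A sketch of that two-step query (column names as in the table definitions above; the @TriggLat/@TriggLon sample values and the Haversine constants are assumptions):

```sql
-- Step 1: cheap pre-filter -- TOP 1 WITH TIES on the combined
--         absolute Lat/Long difference. A covering index on
--         (Latitude, Longitude) INCLUDE (place) supports this.
-- Step 2: run the expensive Haversine ordering only on those
--         few candidate rows.
DECLARE @TriggLat float = 18.52, @TriggLon float = 73.85;  -- sample point

SELECT TOP 1 place
FROM (
    SELECT TOP 1 WITH TIES m.place, m.Latitude, m.Longitude
    FROM Map_Places_India m
    ORDER BY ABS(m.Latitude - @TriggLat) + ABS(m.Longitude - @TriggLon)
) AS candidates
ORDER BY   -- Haversine distance in km (6371 = mean Earth radius)
    2 * 6371 * ASIN(SQRT(
        SQUARE(SIN(RADIANS(Latitude - @TriggLat) / 2))
      + COS(RADIANS(@TriggLat)) * COS(RADIANS(Latitude))
        * SQUARE(SIN(RADIANS(Longitude - @TriggLon) / 2))));
```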

And yet again someone is using the "Great Circle Algorithm" when old reliable Pythagoras is more than adequate for finding the closest location. About the only time I could see using the Great Circle Algorithm is when you want to find the point that is FARTHEST AWAY. Then it might actually make a difference.

I propose that we, as a team, scour the internet for all references to the Great Circle Algorithm, delete them, and then lock the algorithm up in a vault. An application would be required from then on if anybody actually wanted to see it.

And that would take care of all the developers who think they are so smart because they found a way to increase accuracy from 99.9% to 99.99%, with only a mere 100-fold increase in execution time.
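For what it's worth, the Pythagorean version of the nearest-place lookup is a one-liner. A sketch (sample coordinates are assumptions; scaling longitude by the cosine of the latitude is the usual flat-earth adjustment, and since we only compare distances the SQRT can be dropped entirely):

```sql
-- Hypothetical sketch: nearest place by plain Pythagoras.
-- Squared distance ranks the same as distance, so no SQRT and
-- no trig beyond one COS for the longitude scale factor.
DECLARE @TriggLat float = 18.52, @TriggLon float = 73.85;  -- sample point
DECLARE @coslat   float = COS(RADIANS(@TriggLat));

SELECT TOP 1 place
FROM Map_Places_India
ORDER BY SQUARE(Latitude - @TriggLat)
       + SQUARE((Longitude - @TriggLon) * @coslat);
```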

"I have HAD it with these muthu-f$#%in' cursors in my muthu-f$#%in' database!"