Finding the Login That Deleted a SQL Server Agent Job

A SQL Server instance contains many objects: databases, logins, SQL Server Agent jobs, features such as the Management Data Warehouse (MDW), and so on. As SQL Server DBAs, we are responsible for managing all of these objects, and part of that responsibility is securing them against unauthorized access, which can otherwise lead to unexpected situations. For example, an unauthorized user who gains access to a SQL Server database can read critical data, or worse, update or delete it. There are many ways to secure the data, from restricting user permissions to using encryption at both the data and transport layers. Even with all of these restrictions in place, there will still be highly privileged users who can access the SQL Server instance and make changes.

One issue we recently faced in our environment was that someone deleted a SQL Server Agent job, and upon checking internally, no one came forward to take responsibility. So we needed to identify the login that performed the deletion. Unless some monitoring is already in place, we may not notice that a job has been removed until a later date, and we may not have a backup of the msdb database (which holds the job details), which complicates things further. To delete a SQL Server Agent job, a user needs very high privileges, such as membership in the sysadmin or serveradmin server roles, or job-management permissions in msdb. If no auditing was enabled on the SQL Server instance beforehand, the chances of identifying the login that deleted the job are very limited. Below are some approaches that can help identify the user who deleted the job, along with other details such as the deletion date.
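Since only a limited set of principals can delete a job, enumerating them narrows the list of suspects. The following is a sketch (role names other than sysadmin and serveradmin are assumptions based on the standard SQL Server Agent fixed database roles in msdb; adjust to your environment):

```sql
-- Server-level: logins in the sysadmin or serveradmin roles.
SELECT r.name AS server_role, m.name AS login_name
FROM sys.server_role_members AS srm
JOIN sys.server_principals AS r ON r.principal_id = srm.role_principal_id
JOIN sys.server_principals AS m ON m.principal_id = srm.member_principal_id
WHERE r.name IN (N'sysadmin', N'serveradmin');

-- Database-level: members of the SQL Server Agent roles in msdb
-- (SQLAgentUserRole, SQLAgentReaderRole, SQLAgentOperatorRole).
SELECT r.name AS msdb_role, m.name AS user_name
FROM msdb.sys.database_role_members AS drm
JOIN msdb.sys.database_principals AS r ON r.principal_id = drm.role_principal_id
JOIN msdb.sys.database_principals AS m ON m.principal_id = drm.member_principal_id
WHERE r.name LIKE N'SQLAgent%';
```

Cross-referencing this list with login audit data (if any) can quickly rule out most accounts.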

– SQL Server default trace – the default trace records some important events, including object creation and deletion, so there is a chance the details can be found there. However, the default trace rolls over: once it reaches a certain size and number of files, the older files are removed. If the job deletion is only discovered days later, the relevant data may already be gone.
– Check whether the SQL Server Audit feature is enabled; if so, this is the easiest way to find the details.
– Check the msdb history tables such as sysjobhistory, though in most cases this will not help, because job history is removed when a job is deleted.
– If a monitoring tool captures details about the jobs, check whether it recorded any information about the deletion.
– Check whether any other traces are running on the SQL instance for other purposes; they may have captured the job deletion as well.
– Check the SQL Server Agent history and error log to see whether any information can be found there.
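For the default trace route, a query along these lines can be used to look for deletions in msdb. This is a sketch: Object:Deleted (EventClass 47) is raised for DDL-level object drops, and a job removed via sp_delete_job is a row delete in msdb rather than an object drop, so it may not surface here; the query is still cheap to run and worth checking:

```sql
-- Find the active default trace file and scan it for
-- object deletion events recorded against msdb.
DECLARE @TracePath NVARCHAR(260);

SELECT @TracePath = [path]
FROM sys.traces
WHERE is_default = 1;

SELECT  t.StartTime,
        t.LoginName,
        t.HostName,
        t.ApplicationName,
        t.ObjectName,
        t.DatabaseName
FROM sys.fn_trace_gettable(@TracePath, DEFAULT) AS t
WHERE t.EventClass = 47            -- Object:Deleted
  AND t.DatabaseName = N'msdb'
ORDER BY t.StartTime DESC;
```

Passing DEFAULT as the second argument to fn_trace_gettable reads all rollover files that still exist on disk, which maximizes the window you can search.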

If you cannot find the details even after checking all of the above, you most likely will not be able to find them at all; in that case, move on and recreate the jobs. Make sure to enable auditing so that the details can be found if the issue happens again in the future. If you do not have a script of the job, restore a backup of the msdb database as a new user database and recover the job details from the sysjobs, sysjobschedules, and related tables. For convenience going forward, set up a job that scripts out all instance-level objects to a backup share; this is useful in cases like this one, where logins, jobs, or other instance-level objects are deleted from the SQL instance.
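The msdb recovery step above can be sketched as follows. The backup path, file names, and job name are hypothetical placeholders; substitute your own:

```sql
-- Restore msdb side by side as a user database so the live msdb
-- is untouched, then read the job metadata from the copy.
RESTORE DATABASE msdb_restored
FROM DISK = N'D:\Backups\msdb.bak'                      -- hypothetical path
WITH MOVE N'MSDBData' TO N'D:\Data\msdb_restored.mdf',  -- logical file names
     MOVE N'MSDBLog'  TO N'D:\Data\msdb_restored.ldf',  -- from RESTORE FILELISTONLY
     RECOVERY;

-- Pull the deleted job's definition and step commands from the copy.
SELECT j.name, s.step_id, s.step_name, s.subsystem, s.command
FROM msdb_restored.dbo.sysjobs AS j
JOIN msdb_restored.dbo.sysjobsteps AS s ON s.job_id = j.job_id
WHERE j.name = N'MyDeletedJob';                         -- hypothetical job name
```

Run RESTORE FILELISTONLY against the backup first to confirm the logical file names before the WITH MOVE clauses, and check sysjobschedules in the same restored copy to rebuild the schedule.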
