WEBVTT
NOTE
duration:"00:39:03.7680000"
language:en-us
NOTE Confidence: 0.957596898078918
00:00:00.370 --> 00:00:06.880
Thank you all for joining me today for SentryOne Tools for Productivity and Performance.
NOTE Confidence: 0.907055199146271
00:00:08.020 --> 00:00:28.030
As you can kind of see from this image, I am not quite Kevin Kline. Unfortunately, Kevin had to leave the conference early due to a family emergency, so you get me instead: a compressed Kevin Kline. My name is Andy and I work for SentryOne as a solutions engineer, so I work with our product on a daily basis. But what I always like to point out on my bio slide is that I'm at.
NOTE Confidence: 0.922462284564972
00:00:49.620 --> 00:01:09.630
I'll tell you what, it's saved my bacon numerous times, so I was very, very honored when SentryOne approached me to join their ranks. The other key detail here is my email address. I love helping people out, I love answering questions. So if you ever find yourself with a question, whether you be an existing customer or a new customer,
NOTE Confidence: 0.901926577091217
00:01:10.420 --> 00:01:30.430
please feel free to contact me, I'm happy to help out however I can. And I would like to take a quick poll of the room: are there any existing customers in the room right now? Ten? Wow, OK, almost half of you. Welcome, and thank you for being SentryOne customers. And for the other half of the room, welcome as well. I'm glad to have you here to learn a little bit more about us
NOTE Confidence: 0.843813419342041
00:01:31.220 --> 00:01:34.460
and some of the things that we can offer. Alright, enough of that.
NOTE Confidence: 0.910487413406372
00:01:35.400 --> 00:01:55.410
So most people know us by our legacy, which is SQL Sentry. We were one of the primary and premier monitoring tools for on-prem, traditional SQL Servers, but a lot of people don't necessarily realize or know that we can monitor much, much more than just on-premises SQL Server. How many people here use
NOTE Confidence: 0.845492422580719
00:01:56.200 --> 00:02:16.210
Analysis Services? Anyone? A couple of hands, excellent. Well, we can monitor that. We can also monitor Azure: Azure SQL Database, VMs up in Azure. We also have support for Azure SQL Managed Instance as well. And the other cool thing that we can do: how many people here are virtualized, say on VMware?
NOTE Confidence: 0.895122945308685
00:02:17.870 --> 00:02:37.880
OK, about a third of the room. Well, we can also give you visibility into that virtualization layer, into your host layer, 'cause sometimes you deal with certain issues with your virtualized SQL Servers that aren't really an issue with the SQL Server itself, and I'll show you a little demonstration of that later. And of course, we are the makers of Plan Explorer, which we are most well known for, and if you've never dealt with it
NOTE Confidence: 0.902221739292145
00:02:38.670 --> 00:02:58.680
but you do deal with execution plans, spend some time to tear it apart. But this is not the entire picture. In addition to the products that I just talked about, that we are most well known for, last year we expanded our product suite by acquiring a company called Pragmatic Works Software, so now we're able to offer software
NOTE Confidence: 0.897599458694458
00:02:59.470 --> 00:03:19.480
for the entire development stack. We call this database DevOps. So for example, you have a couple of development tools up here, for example Task Factory, which offers some extra components for SSIS development. In addition to that, we offer solutions as far as the BI stack is concerned, for example BI xPress, which
NOTE Confidence: 0.93476676940918
00:03:20.270 --> 00:03:26.040
can give you logging capabilities within your SSIS packages that are not typically available to you.
NOTE Confidence: 0.897115349769592
00:03:27.570 --> 00:03:47.580
From an administration perspective, one of my favorites is DOC xPress. How many people here have ever had to create a data dictionary of their SQL Server environment, of all their different databases? I see a couple of hands being raised. I've had to go through that nightmare myself. DOC xPress allows you to hook into one of your systems and it will actually trace out the map of the entire
NOTE Confidence: 0.871203422546387
00:03:48.370 --> 00:04:08.380
ecosystem of your database: the SSIS packages that feed databases into it. And you can do things such as analyze your data lineage. How many times have you had a table and you wonder, what populates that specific column? Well, we can now show you through DOC xPress that it always comes in from this SSIS package, which comes in from this other data source,
NOTE Confidence: 0.829686105251312
00:04:09.170 --> 00:04:29.180
and so forth, and you can see the lineage of all of your data. And then finally, the other favorite of mine is LegiTest. How many people here actually test their data? We test our application code, right? But how many people actually test their data? So LegiTest offers you some fantastic capabilities around that. Now,
NOTE Confidence: 0.900150835514069
00:04:29.970 --> 00:04:49.980
for the next 45 minutes or so, what I'm primarily going to be focusing on is SQL Sentry, SentryOne, and monitoring your SQL Servers with the core product suite, but I'm going to try and take things beyond just monitoring SQL Server in a traditional fashion. So this is our goal for today.
NOTE Confidence: 0.916454017162323
00:04:51.120 --> 00:05:02.530
So I did an informal poll amongst my friends and I asked them, you know, what are some of the common things that people typically think about when they're monitoring their SQL Servers? So here's the top 5.
NOTE Confidence: 0.924135088920593
00:05:03.480 --> 00:05:11.870
Tracking disk space usage. So I don't know about you guys, but back when I was a DBA, sometimes we would run into running-out-of-disk-space challenges and things like that.
NOTE Confidence: 0.890522241592407
00:05:13.260 --> 00:05:25.200
Tracking job progress. Many of us use SQL Server Agent jobs to manage various tasks within the business, whether it be ETL jobs or even just your CHECKDBs and your backup jobs.
NOTE Confidence: 0.926656067371368
00:05:26.600 --> 00:05:32.270
Getting alerts. We all want to know when something goes sideways, when something blows up. We need to be alerted to that.
NOTE Confidence: 0.919491946697235
00:05:33.850 --> 00:05:43.380
Seeing physical resource utilization. Well, hey, I'm running really hot at 90% CPU, or I'm under a tremendous amount of memory pressure. We need to know that type of information.
NOTE Confidence: 0.902540802955627
00:05:44.400 --> 00:06:04.410
And the most common thing is, hey, something's on fire, what's going on right now? Our users are the first alert: we didn't get an email, it's our users that are now complaining to us that something is going on. Now here's the thing: SentryOne can do all of this, but my goal is not just to show you that we can do all of this; it's to show you that we can do something beyond it.
NOTE Confidence: 0.886462509632111
00:06:06.480 --> 00:06:24.740
So the agenda for today: we're going to go through 4 different chapters. I've told them in the form of a little story, if you will. The first story is called Managing All the Things. The second story is What's Stressing Out My SQL Server. The third chapter is Is It SQL Server's Fault?
NOTE Confidence: 0.881273567676544
00:06:25.750 --> 00:06:45.760
And the fourth is helping you get more sleep at night. OK, so this is our agenda. So hopefully you guys are all eager to see all this stuff, so let's dive right on in: managing all the things. So here's our scenario. Hopefully those of you who are operational DBAs are taking backups of your databases.
NOTE Confidence: 0.893470048904419
00:06:46.550 --> 00:07:06.560
But how many people are actually testing your backups by restoring them somewhere else and running CHECKDB against them? Because for all you know, the backups you're taking may be corrupt. So really what you should be doing is taking test restores of them, and one common methodology is to back them up like you normally do, to wherever you typically back them up to, and then have a junk server whose sole
NOTE Confidence: 0.881584525108337
00:07:07.350 --> 00:07:27.360
responsibility is to read those backup files, restore them to a database, run CHECKDB, and as long as nothing blows up, drop them and move along. That's its sole reason for existing. So imagine having to manage SQL Server Agent jobs across numerous different servers. You know how many times do you have to
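The restore-and-verify routine the speaker describes can be sketched in T-SQL. This is a minimal illustration of the general pattern, not SentryOne functionality; the database name, file paths, and backup file below are all placeholders you would supply yourself.

```sql
-- Sketch of one backup verification pass on a scratch server.
-- All names and paths here are hypothetical; adapt to your environment.
RESTORE DATABASE [RestoreCheck_MyDb]
FROM DISK = N'\\backupshare\MyDb\MyDb_full.bak'
WITH MOVE N'MyDb'     TO N'D:\ScratchData\RestoreCheck_MyDb.mdf',
     MOVE N'MyDb_log' TO N'D:\ScratchLog\RestoreCheck_MyDb.ldf',
     REPLACE, STATS = 10;

-- If the restore succeeded, make sure the data itself is not corrupt.
DBCC CHECKDB (N'RestoreCheck_MyDb') WITH NO_INFOMSGS, ALL_ERRORMSGS;

-- Nothing blew up, so the backup is good; clean up and move along.
DROP DATABASE [RestoreCheck_MyDb];
```

In practice you would wrap this in a loop over the night's backup files and alert on any restore or CHECKDB failure.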
NOTE Confidence: 0.893641591072083
00:07:28.150 --> 00:07:48.160
do things like opening up the job activity monitor, for example, across multiple different SQL Servers? What if you could get visibility into all of that in one single pane of glass? That's what I'm going to show you. Or what about timeboxing those jobs, right? Maybe you can't run all those jobs simultaneously. SentryOne can also act as a job scheduler for you, and that's what we'll look at
NOTE Confidence: 0.285646229982376
00:07:48.950 --> 00:07:49.740
First.
NOTE Confidence: 0.46442386507988
00:07:50.980 --> 00:07:51.460
So.
NOTE Confidence: 0.896191537380219
00:07:52.400 --> 00:08:12.410
We have a feature called event chains. So what you see with each of these different boxes: each of these boxes is actually a SQL Server Agent job, an existing Agent job, but they're all on different SQL Servers, and that's the kicker here. So what we have first here is a SQL Server backup job on one of my servers.
NOTE Confidence: 0.903704106807709
00:08:13.200 --> 00:08:33.210
This is the job that will kick everything off. Once it completes, you will see the 2 different arrows, and what that says is that we will then kick off 2 new SQL Server Agent jobs, and those exist on 2 completely different servers. These are just existing SQL Server Agent jobs; you don't have to create anything new, we can chain your existing jobs.
NOTE Confidence: 0.850917875766754
00:08:34.000 --> 00:08:54.010
It's going to run backups on WinSQL1 and WinSQL2 simultaneously; whenever WinSQL1 completes, WinSQL3 will fire off. I apologize about the scaling; I'm having some unexpected issues with the projector. But in either case, WinSQL3 will kick off whenever WinSQL1
NOTE Confidence: 0.867434799671173
00:08:54.800 --> 00:09:14.810
completes, independently. Then the chain moves to kick off the Analysis Services jobs over here and will kick off those 2 backup jobs simultaneously. When all of them finally complete, then RedmondSQL03 will be kicked off here. So in this fashion, I'm not, I don't have to do the math of,
NOTE Confidence: 0.89113062620163
00:09:15.600 --> 00:09:35.610
I don't have to reckon that this job typically runs in 2 hours, so I'm going to give a 30-minute buffer and then have this next job fire off independently. I can just let SentryOne manage all of that. So this might be useful if you have multiple ETLs, for example, feeding into a single data warehouse, and then after all those ETLs complete, maybe a
NOTE Confidence: 0.886142611503601
00:09:36.400 --> 00:09:52.230
processing job runs. Well, what happens if one of those ETLs runs long and that scheduled processing job, you know, suddenly kicks off while the ETL isn't done? You're in trouble. You've gotta reset that processing job or something like that. In addition to all that,
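For context, the timing-buffer problem described above is what you hit when chaining by schedule. The same-server DIY workaround is a final job step that starts the next job, which is essentially what event chains generalize across servers. A hedged sketch (the job name is hypothetical, and cross-server chaining would additionally need a linked server or remote call):

```sql
-- DIY alternative to event chains, same server only: make the last
-- step of Job A start Job B, so B fires on completion rather than
-- on a schedule padded with a guessed time buffer.
EXEC msdb.dbo.sp_start_job @job_name = N'DW Load - Step 2';  -- placeholder name
```

Note that sp_start_job returns as soon as the job is requested to start; it does not wait for the downstream job to finish.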
NOTE Confidence: 0.949158549308777
00:09:54.440 --> 00:09:58.740
we can give you visibility into these different jobs
NOTE Confidence: 0.92851722240448
00:10:00.150 --> 00:10:02.050
Through something called the event calendar.
NOTE Confidence: 0.902503728866577
00:10:08.380 --> 00:10:28.390
So, in this Outlook-style format, I can now show you all the different jobs I have executed. So you can see in this visual methodology how long each of the different jobs has run. The green status bar shows me that these jobs have all succeeded. I can see how they're overlapping, or hopefully not overlapping, one another, and I can even get
NOTE Confidence: 0.905697286128998
00:10:29.180 --> 00:10:49.190
visibility into job status information. For example, let me zoom in: you know, min and max runtime and things of that nature. So we can do some really cool things around that. And again, think about in Management Studio having to open up the job history across 6 or 7 different servers; that's really cumbersome. This is
NOTE Confidence: 0.901177406311035
00:10:49.980 --> 00:11:06.140
one single pane of glass for all of those different SQL Servers. So this is a great way to be able to approach managing a wide variety of different Agent jobs across multiple different SQL Servers. Alright, so that concludes the first demo and the first story.
NOTE Confidence: 0.936322331428528
00:11:08.850 --> 00:11:09.590
All right.
NOTE Confidence: 0.905976533889771
00:11:11.170 --> 00:11:31.180
So story number 2: what's stressing out my SQL Server? So I don't know about you, but this is a pretty common scenario: you come into work, and hey, a business user comes up to you before you can even have that first cup of coffee in the morning and starts complaining that overnight processing ran really, really slow. Nothing blew up, so nothing failed, and that's why you didn't
NOTE Confidence: 0.907935321331024
00:11:31.970 --> 00:11:48.340
see anything or get an alert, but it still ran really, really slow, so it's delaying the start of our business day. So how do you find out what happened in the past? I was asleep, I was in bed. I'm not up 24/7 monitoring my SQL Servers. So let's talk a little bit about that.
NOTE Confidence: 0.906708836555481
00:11:52.280 --> 00:12:12.290
So for folks who are using DIY scripts or things like that, sp_who, sp_WhoIsActive: you can only see what's going on right now. But with a monitoring solution like SentryOne, we can go back in time, we can go back in history. So what I'm showing you right now is just our Performance Advisor dashboard, which gives us,
NOTE Confidence: 0.922834098339081
00:12:13.080 --> 00:12:33.090
on the one side, what's going on at the server level of this SQL Server instance, and then on the right-hand side we're seeing information about what's going on inside SQL Server, in a nice summarized fashion. Right now we collect data once every 10 seconds, so we offer you a tremendous amount of granularity with very, very little overhead.
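As an aside on the DIY approach mentioned a moment ago: sp_WhoIsActive can be made to keep history by logging its output to a table on a schedule. A rough sketch, assuming sp_WhoIsActive is installed and using a placeholder table name:

```sql
-- Hypothetical setup: capture sp_WhoIsActive snapshots on a schedule
-- (e.g. from an Agent job) so you can look back in time later.
-- One-time: have sp_WhoIsActive generate the matching table definition.
DECLARE @schema NVARCHAR(MAX);
EXEC dbo.sp_WhoIsActive
     @get_plans = 1,
     @return_schema = 1,
     @schema = @schema OUTPUT;
SET @schema = REPLACE(@schema, '<table_name>', 'DBA.dbo.WhoIsActiveHistory');
EXEC (@schema);

-- Then, in the scheduled job step, append a snapshot each run.
EXEC dbo.sp_WhoIsActive
     @get_plans = 1,
     @destination_table = 'DBA.dbo.WhoIsActiveHistory';
```

This gives you a poor man's history table, though at a much coarser interval and with more overhead than a purpose-built collector.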
NOTE Confidence: 0.923275113105774
00:12:33.880 --> 00:12:53.890
The nice thing, though, is up here in the upper area I can change the start and end dates to whatever I want them to be. I'm not limited to the last 30 minutes of activity. So let's, for example, go back to, well, I don't know, I'll go back to about midnight or so. It's about 7 AM on this server, 'cause this is an East Coast server, so let's take a wider perspective of activity
NOTE Confidence: 0.353595525026321
00:12:54.680 --> 00:12:55.090
Overnight.
NOTE Confidence: 0.870471060276031
00:12:56.150 --> 00:12:58.320
So this dashboard will refresh momentarily.
NOTE Confidence: 0.910997033119202
00:12:59.980 --> 00:13:08.530
And now you see that I have this wider perspective of my activity. And hey, take a look: you see that around 3 AM or so,
NOTE Confidence: 0.889208316802979
00:13:09.450 --> 00:13:29.460
I had this interesting spike in SQL Server waits right around here. Well, let's pretend that our nightly processing typically starts at around 2:30 in the morning and typically runs for a couple hours. Well, what's up with this spike in SQL Server waits? Let's take a closer look. So this is where I'm simply just going to highlight this period of time.
NOTE Confidence: 0.902482986450195
00:13:30.250 --> 00:13:38.340
So again, think of it as: I was taking the 50,000-foot view, and now I'm going to zero in on a 500-foot view of the activity that has occurred.
NOTE Confidence: 0.894794225692749
00:13:39.550 --> 00:13:59.560
So now I'm looking around my dashboard and correlating some of the different information here. I'm highlighting, I'm tagging one of the areas of my dashboard, the SQL Server wait chart, but notice that every single other chart was also highlighted over that same period of time. This is a simple characteristic, but one that I argue is very powerful, because it allows me to quickly visually correlate activity together. If
NOTE Confidence: 0.885849237442017
00:14:00.350 --> 00:14:03.450
I want to sample my SQL Server waits, for example,
NOTE Confidence: 0.782996773719788
00:14:04.500 --> 00:14:05.960
let's see, let me zoom in on it.
NOTE Confidence: 0.884727358818054
00:14:07.200 --> 00:14:20.550
There we go. I will see that, hey, we had a high volume of CXPACKET waits. Well, hey, that means we have some parallelism going on. So a lot of people will immediately jump to, oh, well, you must have been under CPU pressure. But
NOTE Confidence: 0.910169303417206
00:14:21.450 --> 00:14:41.460
if I look over here, sure, we had a spike in CPU activity, but really we didn't even hit 50% CPU utilization. Not much really happened there. So you know, there are other tools out there that, for example, will give you your top 10 waits, but those can be misleading if you don't look at the other metrics, and this is an example of that.
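For readers without a monitoring tool, the raw numbers behind a wait chart like this come from sys.dm_os_wait_stats. Those counters are cumulative since instance startup, so to see what spiked in a window you snapshot and diff; a hedged sketch (the interval and the benign wait types filtered out are illustrative choices):

```sql
-- Snapshot cumulative waits, wait a bit, then diff to see which
-- wait types accumulated time during the interval.
SELECT wait_type, wait_time_ms
INTO #w1
FROM sys.dm_os_wait_stats;

WAITFOR DELAY '00:00:30';   -- sampling interval; pick your own

SELECT TOP (10)
       w2.wait_type,
       w2.wait_time_ms - w1.wait_time_ms AS wait_ms_in_interval
FROM sys.dm_os_wait_stats AS w2
JOIN #w1 AS w1 ON w1.wait_type = w2.wait_type
WHERE w2.wait_time_ms > w1.wait_time_ms
  AND w2.wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                           N'REQUEST_FOR_DEADLOCK_SEARCH', N'BROKER_TASK_STOP')
ORDER BY wait_ms_in_interval DESC;
```

As the speaker's point illustrates, a list like this only tells you what SQL Server waited on; you still need CPU, memory, and IO counters alongside it to interpret the waits correctly.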
NOTE Confidence: 0.874386668205261
00:14:42.250 --> 00:14:46.200
Let me undo this and then zoom in over here.
NOTE Confidence: 0.901371538639069
00:14:47.880 --> 00:15:07.890
On the other hand, if I'm looking at some of my other key metrics: let's take a look at SQL Server memory utilization, and I see the buffer pool utilization suddenly take a drop. We evicted a bunch of pages out of the buffer pool. My page life expectancy also took a nosedive, and in conjunction with that, we had a big spike in page reads, which means we
NOTE Confidence: 0.907854676246643
00:15:08.680 --> 00:15:28.690
read a bunch of data off of disk. By correlating these multiple symptoms together, now I'm actually seeing that whatever code was running during this period of time seemingly did a lot of IO, unexpected IO, because it was pulling a lot of data off of disk that wasn't already in the buffer pool. So what I should be really curious about now is not necessarily the queries that went parallel
NOTE Confidence: 0.873894810676575
00:15:29.480 --> 00:15:49.490
and potentially put me under CPU pressure, but rather unexpected queries that consumed a lot of IO, that maybe should not have been running during this period of time. So how do we find that out? That's very easy with SentryOne. So again, I'm just going to highlight this period of time, and then I'm going to make use of the right-click functionality, and
NOTE Confidence: 0.902072429656982
00:15:50.280 --> 00:16:10.290
in this case I'm going to jump to Top SQL. I would love to zoom in on this, but when I try to zoom in, the right-click context menu disappears. But as you can see, I have a lot of different options. I can move back to, say, my event calendar, for example, and see, did I have any long-running jobs that could have interfered with this? Or I can move over to my blocking tab to see if I had any blocking queries during this window.
NOTE Confidence: 0.858130395412445
00:16:11.080 --> 00:16:16.030
But in this case, I want to know about the workload, so I'm going to pull up a tab called Top SQL.
NOTE Confidence: 0.904125094413757
00:16:16.960 --> 00:16:36.970
We have 4 different tabs of information inside Top SQL, but the tab that I'm going to focus on for today's demonstration is Completed Queries, which is meant to show you information about your individual heavy-hitting, long-running queries. So what we're showing you up at the top are individual queries that have a duration of 5 seconds or longer, and as you see as I'm scrolling to the right, we have various
NOTE Confidence: 0.905485153198242
00:16:37.760 --> 00:16:47.120
data points about them: CPU utilization, duration, the number of reads, when they started and ended, so on and so forth, things that I would expect to see.
NOTE Confidence: 0.83258330821991
00:16:48.190 --> 00:16:52.800
So I'm going to sort by reads and scroll on up to find my top read consumers.
NOTE Confidence: 0.913393437862396
00:16:54.590 --> 00:17:14.600
So here are my top read consumers. But here's the thing: looking at this information in this raw form is actually not that effective, because I'm looking at individual executions of identical pieces of code. So how do I get a sense, from a workload perspective, from a server-level perspective, of what were my top
NOTE Confidence: 0.877600848674774
00:17:15.390 --> 00:17:24.290
resource consumers? So what I'm actually going to do is use something called Show Totals, which will aggregate everything together, and then I'll re-sort by total reads.
NOTE Confidence: 0.917474627494812
00:17:31.860 --> 00:17:39.000
And now I see, from a total reads perspective, 16, 15, and 14 million logical reads between
NOTE Confidence: 0.905648171901703
00:17:40.850 --> 00:17:48.890
these 3 pieces of code, 1, 2, and 3. But these were run in volume: as I scroll all the way to the right,
NOTE Confidence: 0.912706792354584
00:17:51.470 --> 00:18:11.480
the top 2 statements were actually reports, and they were run 17 times each in this very short time frame. Why was someone up at 3 in the morning with insomnia, running a report 17 times over? Well, that obviously interfered with my nightly processing. Let's find out, you know, who was doing that. Was it Bob the analyst who happened
NOTE Confidence: 0.910125434398651
00:18:33.070 --> 00:18:47.050
While I'm talking through this entire process, though, I have been able to drill into this information and find this root cause in probably 2 minutes or less. So how's that for fast triage? How's that for rapid root cause analysis? Think about that.
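The "show totals" idea can be approximated without a monitoring tool using the plan cache DMVs, aggregating reads across executions instead of looking at single runs. A rough, cache-resident-only equivalent:

```sql
-- Aggregate logical reads per cached statement, so repeated
-- executions of the same code roll up the way Show Totals does.
SELECT TOP (10)
       qs.execution_count,
       qs.total_logical_reads,
       qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
       SUBSTRING(st.text,
                 (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                     WHEN -1 THEN DATALENGTH(st.text)
                     ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_logical_reads DESC;
```

The caveat, and part of the speaker's argument, is that these totals cover whatever is still in cache since compilation, not a precise historical window like 2:30-4:30 AM.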
NOTE Confidence: 0.897031903266907
00:18:48.170 --> 00:18:51.340
Alright, so that concludes that little story.
NOTE Confidence: 0.891124904155731
00:18:56.610 --> 00:19:16.620
Is it SQL Server's fault? So one of the common things that I hear, in fact I think I've heard this a number of times just during the course of this conference, is DBAs coming up to me and saying, you know, we're always the first ones to be blamed whenever something is going wrong with our SQL Servers, right? And we always have to figure out: is it SQL Server's fault
NOTE Confidence: 0.903626918792725
00:19:17.410 --> 00:19:37.420
or something else's fault? So we have a couple of different ways within SentryOne that we can show you that type of information. Because again, sometimes the goal is not necessarily to find the problem within SQL Server; it's to eliminate SQL Server as a culprit and point the finger somewhere else. I know pointing fingers is sometimes bad, but sometimes we do need to say,
NOTE Confidence: 0.884605586528778
00:19:38.210 --> 00:19:41.640
it's not here, it's over here, and you need to prove that.
NOTE Confidence: 0.907634556293488
00:19:45.340 --> 00:19:51.270
So for this, I'm actually going to first go back to the dashboard and I have a fantastic example of this right here.
NOTE Confidence: 0.895933747291565
00:19:53.520 --> 00:20:13.530
This is a simple example, but notice in the CPU chart right here, we have color-coded utilization, where I have this dark green and this light green. The dark green actually represents SQL Server CPU utilization, while the light green represents non-SQL Server CPU utilization. So this is
NOTE Confidence: 0.90739917755127
00:20:13.610 --> 00:20:33.620
a very simple example of how we can show you that, hey, resources are being used in a way that I didn't expect. So a very common scenario might be, well, Bob the junior DBA, you know, remoted into that production machine, fired up Management Studio, and started doing some work locally on the production SQL Server. You know, there have been cases where people have done that, or
NOTE Confidence: 0.887506246566772
00:20:34.410 --> 00:20:54.420
someone decided to turn on antivirus accidentally and is running antivirus scans on your SQL Server. Or here's a crazy story, and this is actually a true story, where we were working with a prospect. The DBAs were being blamed for heavy CPU utilization on their SQL Servers in the middle of the night. They brought us in to take a look around, and it turned out
NOTE Confidence: 0.911555945873261
00:20:55.210 --> 00:21:15.220
that a rogue sysadmin at their company had installed Bitcoin mining software on their SQL Servers to run in the middle of the night, 'cause it was very clear when we just glanced at the CPU chart that CPU utilization by SQL Server was maybe 10%, but the other 80% was something else. So we dug into the processes,
NOTE Confidence: 0.887417435646057
00:21:16.010 --> 00:21:34.220
and we saw this unknown EXE. We did some more digging and we found out it was Bitcoin mining software. I wish I was making this story up, and I'm not. And yeah, there was a very interesting conversation thereafter, let's say that, and someone soon did not have a job there. After that we were good to go, though. So that's just one example.
NOTE Confidence: 0.876484811306
00:21:35.470 --> 00:21:39.730
For a second example, let me go over to this other SQL Server over here.
NOTE Confidence: 0.896424531936646
00:21:40.640 --> 00:22:00.650
So let's pretend you're looking at a SQL Server. You're being told that there are a lot of performance issues on WinSQL1 in this particular case, and you're looking through things: you spend some time on the dashboard, you spend some time in Top SQL, and you just can't find anything that's wrong. Well, remember how I asked you guys who here is virtualized, and a number of you raised your hands? Well, this
NOTE Confidence: 0.885818123817444
00:22:01.440 --> 00:22:21.450
SQL Server also happens to be virtualized, as you can tell in the upper corner up here. This is a VMware VM, but notice something more important: the VMware host, and this is a hyperlink, because we have a secondary product called V Sentry, which hooks into vSphere and gives you visibility into your VM host,
NOTE Confidence: 0.889606654644012
00:22:22.290 --> 00:22:32.100
because it would be nice to know if something else is going on in the host layer that is potentially causing you issues. So let me show you. In order to find that out, all I have to do is click this button here.
NOTE Confidence: 0.917055785655975
00:22:34.500 --> 00:22:54.510
And now I can see resource utilization at the host level. So on the left-hand side, this is now host-level physical resource utilization, and on the right-hand side, instead of SQL Server-level utilization, I'm now seeing VM-specific utilization, broken up by different color codes. So for example, if I were to mouse over, I get
NOTE Confidence: 0.899379312992096
00:22:55.300 --> 00:23:15.310
a nice tooltip as far as which VM is actually causing the issues. So for example, I see, you know, my CPU utilization all around here is pretty steady, but right at 5 AM, maybe when my users are complaining, I suddenly see a spike in CPU utilization in one of my VMs. Maybe I did not expect that, for example.
NOTE Confidence: 0.895007967948914
00:23:16.100 --> 00:23:36.110
Well, this is an example of where I can mouse over and right-click. I see which VM that is, and I happen to be monitoring that VM with SentryOne, so now I can jump down to that VM's dashboard. So this is an example of not just moving laterally throughout your environment, but vertically, up to the host level and back down,
NOTE Confidence: 0.899336397647858
00:23:36.900 --> 00:23:56.910
because many of us deal with noisy-neighbor VM scenarios, right? So now I can very clearly see that we had this spike in CPU utilization on this server, and now we can go about troubleshooting this SQL Server using the methodology that I talked about before. So this is a simple example of, like I said, being able to say, you know what, it's not SQL Server that's the problem;
NOTE Confidence: 0.887680649757385
00:23:57.700 --> 00:24:17.710
maybe it's something else going on at the VM layer, or maybe it's a non-SQL Server VM that suddenly got vMotioned over to this host because of some other failover, and now that's causing the issues. Imagine being able to have that information, and be armed with that information, so that you can approach the VM team and say, hey guys, I know what's
NOTE Confidence: 0.893755078315735
00:24:18.500 --> 00:24:35.700
causing my SQL Server issues, you need to do this, rather than going to your VMware admin saying, something is going on but I don't know what, go find out. So it creates much clearer communication. It builds stronger bridges and bonds, just by having that extra level of visibility into your environment.
NOTE Confidence: 0.939830660820007
00:24:37.770 --> 00:24:38.490
All right.
NOTE Confidence: 0.92698335647583
00:24:42.260 --> 00:24:45.610
Helping you get more sleep. And now we actually have 2 stories in here.
NOTE Confidence: 0.887554883956909
00:24:47.330 --> 00:25:07.340
So first of all, nightly ETL failures. This is actually based upon something that I used to have to deal with back when I was a DBA at a very, very, very large bank. So we had this ETL job; it was a very old, fragile package, and clients would upload these files on a nightly basis, but sometimes customers were late
NOTE Confidence: 0.88685941696167
00:25:08.130 --> 00:25:28.140
uploading the files, right? So the ETL job would still kick off at 2 in the morning, and the file would exist and it would try to be consumed, but because it was still in the process of being uploaded, it was incomplete, so the ETL job blew up. But the thing is, the process was fragile, such that we couldn't magically just blindly restart the package. We had to go in and
NOTE Confidence: 0.903509318828583
00:25:28.930 --> 00:25:48.940
run some code to clean up and reset state within our database before we could kick off that job again. So unfortunately, this was a process that could not be done by an operator; a developer had to be paged and woken up in the middle of the night, and this was very unpleasant. I wasn't even an operational DBA at the time, but because of the way our business was organized,
NOTE Confidence: 0.89971661567688
00:25:49.730 --> 00:26:08.640
this responsibility fell to the developers, and unfortunately fell to me, and this happened 2 to 3 times a week. So imagine getting woken up on an almost regular basis at 2:15 in the morning just to run the same set of code, restart the job, and make sure it runs again. So what if you could automate all that away? Wouldn't that be awesome? Let me show you how.
NOTE Confidence: 0.897305727005005
00:26:11.020 --> 00:26:31.030
So in SentryOne, we have a very robust alerting methodology, an alerting system, such that we use the terms conditions and actions. I'm just resetting some of my panels here. So when a condition is met, a corresponding action is fired, and that's what you can see over here on the right-hand side:
NOTE Confidence: 0.896621882915497
00:26:31.820 --> 00:26:51.830
a list of the global conditions and corresponding actions. Now, what people most typically think about from an alerting system is getting an email, or maybe getting a page, or using an SNMP trap, and we can do that. That's pretty mundane; anyone can do that. But on the other hand, you see another interesting one down here: execute process, or execute job.
NOTE Confidence: 0.915511548519135
00:26:52.870 --> 00:27:12.880
What SentryOne has the capability to do is that when a condition is met, instead of just sending an email, we can execute some SQL, some PowerShell, an Agent job, or a Windows process for you. We call this automated responses, so that way, instead of you being forced to wake up and just do that same exact routine, which you know exactly how to do, you can have the system do it.
NOTE Confidence: 0.900502920150757
00:27:13.670 --> 00:27:15.970
And let me show you how this works.
NOTE Confidence: 0.895968079566956
00:27:17.070 --> 00:27:37.080
So what I'm going to do is go down to one of my specific SQL Servers, Redmond SQL 03, because I don't want this to occur on all of the SQL Servers in my entire environment; only one of those SQL Servers happens to have this job. So in my Conditions and Actions tab here, you see that I have two Agent job failure overrides defined.
NOTE Confidence: 0.890403389930725
00:27:37.870 --> 00:27:57.880
One is going to execute some SQL, and one is going to send some email. The first Agent job failure is going to execute some SQL, and in the Actions tab here you see exactly what's going to be executed. In this case, it's just a stored procedure to clean up some staging tables, and then I'm going to re-kick off that job. But how do I know which job I'm going to do this on? I don't want this to occur on
NOTE Confidence: 0.851071298122406
00:27:58.670 --> 00:28:06.370
every job failure that occurs on this server, right? So we need to focus that, and that's what this next tab is about: condition settings.
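The cleanup-and-restart action described here is an execute SQL action attached to the job failure condition. The talk doesn't show the actual code, so the procedure and job names below are hypothetical placeholders, but the SQL the action runs might look roughly like this:

```sql
-- Hypothetical automated response for the ETL job failure condition.
-- dbo.usp_CleanupEtlStaging and 'Nightly ETL Load' are invented names;
-- the session does not show the real procedure or job.

-- Step 1: clean up and reset state in the staging tables.
EXEC StagingDB.dbo.usp_CleanupEtlStaging;

-- Step 2: restart the failed SQL Server Agent job from the top.
EXEC msdb.dbo.sp_start_job @job_name = N'Nightly ETL Load';
```

`sp_start_job` is the standard msdb procedure for kicking off an Agent job, which is what "re-kick off that job" amounts to when scripted.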
NOTE Confidence: 0.89138275384903
00:28:08.860 --> 00:28:28.870
So the condition settings are my predicate for this specific condition. In this case, I've set up two different predicates: the message text of the failure must contain the name of my job, and the owner must be the service account that runs my ETL job, because maybe sometimes the DBAs or a business user have to manually run the ETL job,
NOTE Confidence: 0.892776966094971
00:28:29.660 --> 00:28:49.670
as opposed to when the service account runs it automatically at 2 in the morning. So this is how I automated that away; now it can execute that SQL and I get more sleep at night. That's great, and it mitigates a good 80 to 90 percent of the times that this ETL job fails. But there are other times when this ETL job fails that are less common, but they still happen, like, wow,
NOTE Confidence: 0.86691814661026
00:28:50.460 --> 00:29:10.470
a bad file was sent to me. I don't want this to run in an infinite loop; I still want someone to be woken up eventually. So how can I manage that? That's where the second ETL Agent job failure, the send-an-email action, comes into play. So over here I'm going to select this one. The condition settings are identical, so I'm not going to bother zooming in on that.
NOTE Confidence: 0.885669231414795
00:29:11.260 --> 00:29:18.540
It's set to send an email, but I'm not going to have it send an email every single time; I'm going to use something called a response ruleset.
NOTE Confidence: 0.90854012966156
00:29:19.810 --> 00:29:39.820
With the response ruleset, you can set up a count-based and/or time-based ruleset to throttle or control the action, as opposed to the action occurring every single time a condition is met. So in this simple example, I'm going to say: only send an email if this job-failed condition occurs three times in the span of 16.
NOTE Confidence: 0.910813391208649
00:29:40.610 --> 00:30:00.620
That tells me that my restart code has tried to restart it three times over, but it has failed three times in a row in the same short period of time. So now I know I have a bigger problem on my hands. Now I want human intervention; now I want someone to get paged to take a closer look at this. So I now have a method
NOTE Confidence: 0.903955817222595
00:30:01.410 --> 00:30:21.420
of escalation built into this process. But otherwise, I've been able to automate away that very common occurrence, that nuisance occurrence, of having to just deal with a late customer file, as opposed to, say, a corrupt customer file or something even worse. So this is just a simple example of how you can do some really fantastic alert tuning and automation within SentryOne,
NOTE Confidence: 0.908568263053894
00:30:21.440 --> 00:30:26.000
rather than just getting a brute-force email every single time something breaks.
NOTE Confidence: 0.907268583774567
00:30:27.310 --> 00:30:28.010
All right.
NOTE Confidence: 0.89721405506134
00:30:34.260 --> 00:30:54.270
Business value. So here's the thing: there should be no question that a tool like SentryOne can give you operational value. It can help you out managing your SQL Servers and keeping up to date on what's going on. But what about bringing business value to your organization and your infrastructure? Did you ever think about that? Or did you ever have to deal with a management
NOTE Confidence: 0.901362538337708
00:30:55.060 --> 00:31:15.070
chain that says: you know what, I'm not going to give you money to buy tools, because that's what I'm paying you for, right? But what if you could tell that finance person, that CFO: you know what, this tool can also solve some business problems for you and give you direct ROI to your business, not just operational value? Well, we can do that, and I'm going to show you.
NOTE Confidence: 0.914299190044403
00:31:18.660 --> 00:31:20.840
So let me just change out my windows once again.
NOTE Confidence: 0.934528708457947
00:31:21.900 --> 00:31:41.910
We have something called advisory conditions, which allows you to create your own custom conditions within SentryOne. So instead of just canned conditions, like a job failing or something like that, you can actually create custom scenarios based upon any combination of, say, a WMI query,
NOTE Confidence: 0.886367440223694
00:31:42.700 --> 00:32:02.710
an evaluation query against data that we already collect for you and store in our repository database, or a regular T-SQL query. Because you can create advisory conditions around regular T-SQL queries, start thinking about any time you ever had to create a diagnostic query that didn't look at, say, master or a DMV, but looked at an application database
NOTE Confidence: 0.846102058887482
00:32:03.500 --> 00:32:07.360
and its state. So here's a simple, contrived example.
NOTE Confidence: 0.901363790035248
00:32:09.860 --> 00:32:29.870
So I'm just going to go to the documentation portion so you can see the query in question. This is just a query against product inventory; I'm just looking at AdventureWorks. Let's pretend we're a manufacturing company, right? So in this case, whenever product inventory falls beneath a certain threshold for different widgets, I want to be notified about it, because this is something important to the business.
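The slide's actual query isn't readable from the transcript, but a threshold check like the one described, assuming the standard AdventureWorks sample schema (where Production.Product carries a SafetyStockLevel column), could be sketched as an advisory-condition query that returns a count to compare against zero:

```sql
-- Sketch of an advisory-condition query against AdventureWorks:
-- count products whose total on-hand inventory has fallen below
-- the product's safety stock level. A non-zero result fires the condition.
SELECT COUNT(*)
FROM Production.Product AS p
JOIN (
    -- Sum quantity across all inventory locations per product.
    SELECT ProductID, SUM(Quantity) AS OnHand
    FROM Production.ProductInventory
    GROUP BY ProductID
) AS inv
    ON inv.ProductID = p.ProductID
WHERE inv.OnHand < p.SafetyStockLevel;
```

The advisory condition would evaluate this on a schedule and trigger its actions whenever the count is greater than zero.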
NOTE Confidence: 0.876251578330994
00:32:30.660 --> 00:32:50.670
I can do something very simple, like set up a send-email action to email, maybe not the DBA (the DBA doesn't care), but Bob, the manager of the factory that these widgets go to, warning Bob that, hey, you're going to run out of widgets X, Y, and Z in 6 hours. Or imagine I can execute some PowerShell
NOTE Confidence: 0.884268522262573
00:32:51.460 --> 00:33:11.470
instead, to automatically cut a ticket that goes to procurement, filed automatically to say: you need to go order more X, Y, and Z. So this is another way that you can now automate a business workflow. That's one simple example. Here's a different example, and this one is based upon a real customer.
NOTE Confidence: 0.90318089723587
00:33:12.440 --> 00:33:32.450
We have a customer that's a call center for many, many different companies, and they were using a third-party vendor application, so they were pretty stuck in, and trapped in, how this vendor application works. What this vendor application did is manage the call queues for these different customers and the operators assigned to each of the different call queues. Unfortunately, the application
NOTE Confidence: 0.898761928081512
00:33:33.240 --> 00:33:53.250
was built such that if one of those customers got flooded with calls, someone would have to first notice and complain, and then someone else would have to run some T-SQL code in order to actually correct it, reallocating additional operators back to that call queue that suddenly got flooded. So when they brought on SentryOne and learned about this functionality, what they did was create a simple advisory condition
NOTE Confidence: 0.889933347702026
00:33:54.040 --> 00:34:14.050
that checked their call queues once a minute. It was something pretty simple, just like this. And if any of the call queues exceeded a certain threshold, they ran a custom stored procedure, using an execute SQL action, to automatically reallocate operators. So at all times we were keeping an eye on their application database for them, and then automatically
NOTE Confidence: 0.91291344165802
00:34:14.840 --> 00:34:34.850
rebalancing the operators assigned to those call queues on a constant, ongoing basis. We solved a massive business problem for them, because that third-party vendor was not able to do that for them, but we were, and that brought them tremendous ROI. Imagine being able to do something like that and solve a business problem in your environment. This is why I like to encourage people to think
NOTE Confidence: 0.89953625202179
00:34:35.640 --> 00:34:55.650
creatively about scenarios that perhaps you have to deal with on a regular basis. One story that I used to have to deal with: I used to work for a SaaS company, and this website application had a master actions table that logged all of the users' actions throughout the day, and if users started using a certain portion of our application,
NOTE Confidence: 0.908585369586945
00:35:17.240 --> 00:35:37.250
an alert would go out to the application developers and to the business owners, warning them that, hey, we're going to start running into issues, we're trending in this direction, so that this way they were proactively made aware that, hey, we're about to hit a trouble point here. So again, this is an example of being not reactive but proactive. Imagine being able to do so.
NOTE Confidence: 0.914250731468201
00:35:38.040 --> 00:35:43.860
Right? So that's the kind of capability that we can give you with our custom alerting capabilities.
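The call-center example above follows the same shape as the inventory one: a scheduled query plus a paired execute SQL action. The customer's actual schema isn't shown in the talk, so every table, column, and procedure name below is invented for illustration:

```sql
-- Hypothetical once-a-minute advisory-condition query: does any call
-- queue have more waiting calls than its configured threshold?
-- (dbo.CallQueue and its columns are invented names.)
SELECT COUNT(*)
FROM dbo.CallQueue
WHERE WaitingCalls > QueueThreshold;

-- Paired execute SQL action, fired when the count above is non-zero:
-- a custom stored procedure that reassigns operators to the flooded queue.
-- EXEC dbo.usp_ReallocateOperators;
```

The point of the design is that the monitoring tool polls the vendor's application database on a schedule the vendor never offered, and the corrective T-SQL that a human used to run by hand becomes the automated response.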
NOTE Confidence: 0.944682002067566
00:35:46.690 --> 00:35:47.580
All right.
NOTE Confidence: 0.915496349334717
00:35:48.770 --> 00:36:08.780
So, just to recap: I'm hoping that the stories I just shared with you over the last 30 to 40 minutes were able to give you a better appreciation that monitoring SQL Servers goes beyond just some of those basic things that we all universally think about, and hopefully you got a better sense of how SentryOne
NOTE Confidence: 0.92220801115036
00:36:09.570 --> 00:36:20.560
has greater capabilities than just simply, you know, looking at performance counters and things of that nature, and that we can offer you not only operational value but business value as well.
NOTE Confidence: 0.905623733997345
00:36:21.880 --> 00:36:41.890
So I do want to leave you with some final parting thoughts. First of all, we have a relatively new offering called SQL Sentry Essentials. There are some companies out there that are relatively small SQL Server enterprises with small footprints, maybe only a few instances, and sometimes they have very, very tight budgets. So we now have a new offering called Essentials.
NOTE Confidence: 0.907796204090118
00:36:42.680 --> 00:37:02.690
It's basically a subset of the features that I showed you, at a much lower price point. It does have a hard limit of 5 instances, but it allows those small companies to potentially invest in a fantastic offering such as SentryOne, where maybe the price point of our enterprise product wasn't realistic for them.
NOTE Confidence: 0.890075027942657
00:37:03.480 --> 00:37:11.160
So if you're ever in that situation, come and take a look at SQL Sentry Essentials; maybe that can fit your needs.
NOTE Confidence: 0.866425275802612
00:37:12.330 --> 00:37:32.340
And I want to leave you with some additional parting thoughts, and I kind of touched upon this earlier on: we're more than just SQL Server. We can cover things like your virtual machines and your virtual estate, and we can give you visibility into SSAS and SSIS using things like BI xPress and Task Factory. And we can also help with
NOTE Confidence: 0.899167418479919
00:37:33.470 --> 00:37:53.480
documentation; this is one that I spoke a little bit about early on as well. So again, remember that we have some of those cool capabilities, and if you want to learn more about that, please come visit us at the booth. Another colleague of mine is a subject matter expert on DOC xPress, and he has a demo of that available. And finally, we also have SentryOne
NOTE Confidence: 0.857174754142761
00:37:54.270 --> 00:37:57.770
Test, which again can help you test your data.
NOTE Confidence: 0.90857195854187
00:37:59.400 --> 00:38:19.410
So with that, I want to say thank you. Here are a few things as kind of a call to action. One of the bonuses you get for coming to this session is that if you reach out to Kevin Kline, we will give you a one-year seat license to some of our different products. Now, I'll be honest: I don't know the terms and conditions; Kevin knows that information. But if that's something you're interested in,
NOTE Confidence: 0.878516614437103
00:38:20.200 --> 00:38:40.210
contact Kevin and he can hook you up. Otherwise, if you are interested in a more in-depth demo of our products, you can go online and schedule a demo with one of us; you'd be working online with an engineer such as myself or, as many of you know, Rich Douglas, who's our UK solutions engineer.
NOTE Confidence: 0.899675071239471
00:38:41.110 --> 00:39:01.120
But otherwise, yes, come visit us at the booth if you want to learn some more; we'd love to have some extra discussions with you. And finally, if you actually are interested in booking a demo now, my colleagues are back there, and Ian Donica; they can actually book a demo for you right now as well. So if there are no other questions, or if there are questions, please come visit me. But otherwise, thank you
NOTE Confidence: 0.937512636184692
00:39:01.910 --> 00:39:03.740
for your time. I really appreciate it.