We are back in the keynote room with Adam Jorgensen talking to us about the financial state and health of the PASS organization. It’s currently very healthy with over $1 Million in reserve.

Thomas LaRock is back and thanking those folks who are rolling off the PASS board and have dedicated so much time and PASSion. Sri Sridharan is thanked for his amazing work in helping volunteers take on bigger challenges and being successful.

Denise McInerney talks about how she got involved with the PASS community and her journey of how it helped her become so successful in her career. This leads up to the awarding of the 2014 PASSion award. Only one of these is given out every year, to a volunteer who has gone above and beyond for the organization. Andrey Korshikov from Russia is announced as the winner. Congrats Andrey!

Denise announces the PASS Business Analytics Conference and also the dates for the next PASS Summit back here in Seattle.

Dr. Rimma Nehme is coming on stage to talk about cloud databases. She talks about where she is from and how she got to where she is today at Microsoft’s Gray Systems Lab. She explains cloud computing as a computing service that is available from anywhere, at any time, and is always on. Cloud computing is an on-demand service with location transparency and rapid delivery. First she explains cloud elasticity, which enables quick and easy deployment. She shows some very cool pictures of Microsoft’s physical cloud infrastructure (datacenter). Everything is stored in shipping containers (18-wheeler trailers). She shows all the servers and how they look on the inside of the container. A very cool look behind the scenes.

Now she is explaining how they get all the power these DCs need and how they cool them with “swamp cooling”. She then explains software as a service using the analogy of pizza as a service. A great and easy-to-remember explanation.

Dr. Nehme moves on to explaining virtualization and the cloud. The analogy here is roommates having to share a bathroom with a single sink, and how they now get a large sink that serves everyone. We then move on to multi-tenancy and the tiered approach and options. She also explains SLAs: three nines allows 8.76 hours of downtime a year, and four nines allows 52.56 minutes.
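Those downtime numbers fall straight out of the availability percentage. Here is a quick sketch of the arithmetic (Python, illustrative only):

```python
# Downtime per year implied by an availability SLA.
HOURS_PER_YEAR = 365 * 24  # 8760 hours

def downtime_hours_per_year(availability: float) -> float:
    """Hours of allowed downtime per year for a given availability fraction."""
    return (1 - availability) * HOURS_PER_YEAR

three_nines = round(downtime_hours_per_year(0.999), 2)       # 8.76 hours
four_nines = round(downtime_hours_per_year(0.9999) * 60, 2)  # 52.56 minutes
print(three_nines, four_nines)
```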

SQL DB in Azure is explained as we look at it from the software perspective. We have the infrastructure layer (hardware), the platform layer, the services layer, and the client layer.

Dr. Nehme is now talking about how we still need DBAs even with all the automation and simplicity of the cloud. She encourages us to take our skills, add cloud, and give ourselves a new title: “Cloud DBA”.

This morning I will be live blogging the PASS Summit 2014 Day 1 keynote so those at home can follow along.

Thomas LaRock hits the stage and welcomes everyone to the Summit. He thanks Microsoft as the makers of the best data platform on the planet along with all the volunteers and leaders in this wonderful community. He is giving us a “state of the union” of PASS with numbers on chapters, virtual chapters, and training hours provided.

200 hands-on workshops are being taught and provided by Microsoft, and the ability to take certification exams is available this week. The SQL Server Clinic is also available; it is an area staffed by Microsoft Customer Service and Support where you can get your technical issues addressed directly by Microsoft.

This morning’s keynote is being presented by Ranga Rengarajan, James Phillips, and Joseph Sirosh from the Microsoft Data Platform. We are now watching an introductory video where these 3 executives explain their roles in the MS Data Platform group and what areas they support.

Ranga says he has two daughters and tells them he is getting the industry ready for them as he encourages them toward tech careers. Ranga talks about how he likes maps and how map availability has changed over time because of data. He talks about how data drives so many things in our lives and how things have changed because of it. Ranga shows all the things the Microsoft Data Platform can handle in one engine, including scale-out, scale-up, in-memory, on-disk, OLTP, and data warehouse workloads. Some of the other aspects he points out are capturing diverse data with Azure DocumentDB and Hadoop. We can achieve elastic scale with SQL Server 2014 scaling to 640 logical cores and up to 1TB of RAM per VM, SQL Server on virtual machines in Azure, and Azure SQL DB.

Ranga welcomes Tracy Daugherty from Pier 1 Imports to show how they use some of the new Azure services for their business. His demo shows how they are trying to sell decorative pumpkins during the holiday season and how they can dynamically display seasonal items based on inventory. This lets them know what they need to push from a sales perspective, and Azure allows them to do all of this dynamically and without code changes. He then shows us how they can use elastic scale to handle the additional load during the seasonal promotions. The last thing he shows is Azure geo-replication and how easy it is to add a replica across the country.

The reins are handed back over to Ranga as he shows some customers that are using some scale-out features. He features StackOverflow and some of the interesting things they have done, and we are now watching a video about how Dell uses various features to scale out and scale up their infrastructure. In-memory has had a big impact for them, and they are happy with the ease of deployment in SQL 2014 and how their workloads have performed better on the platform without any changes.

Now that we have seen everything MS has done, Ranga talks about what they are doing in the future. They have a major update coming to Azure SQL DB, including larger indexes, parallel queries, extended events, and in-memory features. Unfortunately, the crowd is silent at the announcement.

Ranga welcomes Mike Zwilling to the stage. He shows how you can now use in-memory and columnstore at the same time on the same table. Mike then shows how you can take an on-premises table and stretch it into Azure. We see how you can still query the data while it is being migrated. He simulates a crash of the local server and shows how long it takes to restore a 150GB database, which takes only seconds, and then synchronizes back with the data in Azure while maintaining transactional consistency.

Joseph Sirosh, VP of Machine Learning and Information Management, is welcomed to the stage. He has the audience do the wave while shouting PASS…Community…Rocks. He talks about the Azure machine learning platform and how we can use the data from machines to analyze and make predictions. We are seeing how Pier 1 Imports can use MS Kinect as a sensor to see where customers are spending time in their stores. They are using Azure to show how they are pulling the data in. They now basically have SSIS in the cloud, through the browser, where you can pull from multiple data sources. We are now seeing the real-time data coming from the Kinect sensors through Azure streaming, and it appears that candles are the hot item of the day. Sanjay has his phone plugged in so we can hear the real-time predictions for his Pier 1 shopping experience. He asks his phone to find cocktail glasses, and the application finds the product and tells us what department it is in. It also brings up a map to show us where to find it in the store.

James Phillips the GM of Data Experiences comes out to tell us about some of the Power BI offerings.

I’m going to close it out as I head out to talk about AlwaysOn in room 6E, which will be live streamed on PASStv. If you are here then I hope you’ll join me and if not I hope you’ll join me online at 10:15 PT.

I can’t believe Microsoft has chosen me for this award and to be included in such a prestigious program. There are only a little over 70 SQL Server MVPs in the United States and a little over 350 worldwide.

For my readers who are unfamiliar with the program here is how Microsoft describes it.

Microsoft Most Valuable Professionals, or MVPs, are exceptional community leaders who actively share their high-quality, real-world deep technical expertise with the community and with Microsoft. They are committed to helping others get the most out of their experience with Microsoft products and technologies.

I presented a session titled “SQL Server AlwaysOn Quickstart” on September 10th for the 24 Hours of PASS event. You can view the session recording HERE if you missed it. The session was a preview of the full session I will be presenting at the PASS Summit 2014 this November. I will be presenting the full session on the first day of the Summit, November 5th, 2014 at 10:15am PST.

The session is right before lunch, which means I don’t have to hurry out to make room for the next presenter since there is a 2 hour lunch break. You came to the Summit to learn about SQL Server and get your questions answered, so I’ll stay after the session for as long as it takes to answer every question you have. If I don’t have the answer then I’ll find it for you, and I really mean that. Ask anyone who I promised to follow up with after one of my sessions.

Make sure to add my session when you’re using the Schedule Builder to plan your Summit sessions! Now on to the questions I received during the 24HOP session.

The FCI needs shared storage and the AG needs local storage. I’m confused on how they can be implemented together.

The nodes in your cluster can have both shared storage and local storage. The AG will use the local storage (could still be SAN attached just not shared) and essentially mirror the DBs on all nodes. The FCI will use the shared storage and might not even be installed across all nodes in the cluster. They are absolutely supported together. Come see my session and I’ll show you how. Click here to download one of my presentations showing you a sample architecture.

I noticed you did not add the system databases to your availability group, is there a reason for that?

Yes. Just like with mirroring, system databases are not eligible and cannot be added to an AG.

If SQL 2012 supports 1 primary and 4 secondary and SQL 2014 supports 1 primary and 8 secondary, how many are read/write?

Only the primary is read/write. All secondary replicas are read only and you can disable that functionality or limit it if you want.

Are the replicas synchronized?

For a replica to be considered “synchronized” it must be in synchronous-commit mode. You can have a maximum of two secondary replicas in synchronous-commit mode.

Is it common to use a SQL FCI as a primary replica and another SQL FCI as a secondary replica for protection and still take advantage of AGs for reporting purposes?

With the exception of the reporting part, this used to be common prior to AGs. You would have two two-node FCIs in different data centers and mirror between them. This gave you HA in both data centers and DR if an entire data center went down. However, this is not so practical with AGs. You would be more likely to have 2 replicas in each data center with an AG across them and no FCIs.

How would you handle getting stored procedures and SQL Agent jobs onto the secondary replicas?

You don’t have to worry about stored procedures since those are stored in the database and will be replicated automatically. SQL Agent jobs will have to be re-created on the secondary replicas and additional code added to exit gracefully if the replica is currently acting as a secondary. You’ll want them to run automatically if the replica becomes the primary. See THIS POST for more information on automating logins and thanks to Robert Davis (Blog|Twitter) for writing the code.

How do you handle installing SQL Service packs after AGs have been implemented?

First make sure you have backups (That’s always rule #1). You’ll want to update a secondary replica first, make sure it is in synchronous mode, and then fail over to it. Now you can upgrade the remaining replicas and fail back to your preferred replica when done.

Does SELECT @@SERVERNAME return the listener name or the node name?

It returns the node name. However, for an FCI it will return the virtual server name instead. The best way to get the node name no matter what you are running is to SELECT SERVERPROPERTY('ComputerNamePhysicalNetBIOS')

Wouldn’t it be possible to have an indexed view on a secondary replica to use with reporting off of that replica?

You cannot create database objects on secondary replicas separate from the primary. You have to create them on the primary and they will sync to the secondary replicas. Remember that secondary replicas are READ ONLY and can never be written to.

With AGs in automatic failover mode how does the client connection timeout have to be configured?

Your applications need to be written to retry the connection. You should also add MultiSubnetFailover=true to the connection string. You should consider reducing the TTL of the listener name. I’ll write another blog post explaining this in more detail, but that is the short answer.
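As a rough sketch of what that looks like from the application side (the server and database names below are made up for illustration; MultiSubnetFailover is the real driver keyword):

```python
# Building a SQL Server connection string with MultiSubnetFailover enabled.
# "AGListener" and "MyAppDB" are placeholder names for this sketch.

def build_connection_string(listener: str, database: str) -> str:
    parts = {
        "Server": listener,                 # connect to the AG listener, not a node
        "Database": database,
        "Integrated Security": "SSPI",
        "MultiSubnetFailover": "True",      # try all listener IPs in parallel
    }
    return ";".join(f"{k}={v}" for k, v in parts.items())

conn_str = build_connection_string("AGListener", "MyAppDB")
print(conn_str)
```

With MultiSubnetFailover enabled, the client attempts connections to all of the listener’s registered IP addresses rather than waiting for a slow TCP timeout on the old subnet, which is what makes automatic failover feel fast to the application.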

What is the advantage of having a group of databases in a single AG?

If you have several databases that support a single application, and any one of them fails, then they all fail over together. If you were mirroring them it was possible that myDB2 could fail over to the mirror while myDB1 and myDB3 continued to run on the primary. That would break the application, but AGs mitigate that risk since all databases in the group stay together within that logical construct.

Can you have replicas in different domains?

No. All replicas have to be part of the same Windows cluster, which requires that all nodes be in the same domain. As a former AD guy I’ll take this a step further and point out that in most situations it’s fine for machines to be in different domains as long as the domains are trusted somewhere in the forest hierarchy. That is NOT the case here.

All servers in our environment are in a cluster of 8 nodes. Would that be okay, or do you still need to create another cluster inside that cluster?

You cannot create a cluster within a cluster. In this case you don’t need to do anything since you already have a cluster, you’re one step ahead of the game.

I learned a hard lesson when I first started as a DBA. Although to be honest, it’s not a lesson I should have had to learn (I’ll keep you in suspense). I work for a large Fortune 100 company and as with many companies our size, there are many processes in IT.

I was in charge of a database for an instance of Microsoft Operations Manager 2000. If you have ever supported that database you already feel my pain, but trust me, there are much worse out there. I digress. I needed a backup plan, and with MOM backing up the database is not enough. You also have to back up your management packs if you changed anything from the default. In addition, any custom management packs would also have to be backed up. I wrote a script to export and copy those to another server daily. I’m very glad I had this in place because it saved my bacon in the end.

So what about the database? We had a piece of backup software that the server folks put on all servers to take care of backups. It’s been long enough now (~12 years) that I don’t even remember what the software was at the time. They asked if we had databases on the server to make sure they got them backed up and assured me that the entire OS drive would be backed up as well. All was well with the world. We went through the typical deployment phases of procuring the hardware, getting it racked, getting it connected and configured on the network, and installing the OS. Once the platform was there and ready to go we began working on the middleware pieces. I won’t get into the MOM component architecture, but we did have two separate servers, one each for the MOM Management database and the MOM Reporting database.

Part of the middleware piece is making sure everything third party is installed and working correctly. The backup software was obviously one of those. It installed without issue and the backup folks got successful backups. Off we went to the races and the deployment went smoothly.

Fast forward about a year and I get an alert that the MOM Management database went into suspect mode. Not something I wanted to see, especially as a new DBA, and on SQL 2000 no less. I did some searching on the internet, and at the time the results were sparse. I attempted a DBCC CHECKDB with REPAIR_REBUILD without luck and even resorted to REPAIR_ALLOW_DATA_LOSS, which also got me nowhere. Keep in mind that this is a MOM alerting system, so losing some data was not a big deal and would have been faster than a restore. Otherwise, I would not have entertained that option at this point.

So what was next? Restore the most recent backup of course! We called the backup guys and told them to restore the most recent backup. They responded and said, “What backup?” Very funny guys, just let me know when it’s done. No, we don’t have a backup of that at all. In fact, we have no record of ever backing it up.

That will make any day a bad day. It’s also why I keep emails, like the one where they confirmed successful backups. I’m now completely up the creek without a paddle. I had to rebuild the entire MOM environment. It took me all night, but I managed to rebuild everything. I was so thankful that I had scripted out exports of all the management packs or there was no way I could have done all that in one night.

Ever since then I have always done my own backups, whether there is a backup team doing it as well or not. I tend to back things up to my DR server, and I often set up a round-robin scenario where server 1 backs up to server 2, server 2 to server 3, and server 3 back to server 1.
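That round-robin layout is just “each server backs up to the next one, wrapping around at the end.” A tiny sketch with hypothetical server names:

```python
# Round-robin cross-backup targets: each server backs up to the next one
# in the list, and the last server wraps around to the first.

def backup_targets(servers):
    """Map each server to the next server in the list (wrapping around)."""
    n = len(servers)
    return {servers[i]: servers[(i + 1) % n] for i in range(n)}

targets = backup_targets(["server1", "server2", "server3"])
print(targets)  # server1 -> server2, server2 -> server3, server3 -> server1
```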

A tough lesson in trust to learn, but it has not and will not ever happen to me again!

Many of you may already know that I am a PASS regional mentor for the South Central region of the United States. My role is to help user group leaders obtain the tools they need to have successful groups and provide an open platform to discuss challenges and successes. One of the groups in my region asked me about how to handle the monetary part of hosting a pre-con for their upcoming SQLSaturday. My response was to explain three different methods that I have seen others use. What I wanted to do was use estimated costs to give a clearer picture of what everyone walks away with at the end of the day.

Option 1 – This option takes all costs out of the gross profit prior to splitting the net profit.

Gross Profit – Venue Cost – Lunch Cost – Travel Costs = Net Profit

Net Profit is split 50/50

Let’s put some numbers into that equation to better understand what the speaker and UG would stand to profit. I usually see most pre-cons set at $125. Some will give an early bird price of $99 and then raise it to $125 at a certain date, but we’ll keep the math simple. Let’s assume we get 15 people to attend at $125 each. That gives us a gross profit of $1,875. Finding a free venue for a pre-con is far easier than for the SQLSaturday itself, but we’ll assume it costs $100 for cleaning and facility services. We’ll also guess that lunch is $15 a person and would cost a total of $225. The last part is travel, and we’ll guess 2 nights in a hotel is $240 (technically it should be 3 nights, but the third is for the SQLSaturday so we won’t count that). Flight costs can vary and I’ve personally paid as little as $200 and up to almost $400, so we’ll go with $300 to be in the middle. Let’s plug those numbers in and see what we get.

1875 -100 -225 -540 = 1010

1010/2 = 505

Both the speaker and the user group will have a net profit of $505.

Option 2 – This option has the speakers pay for their travel, but the profit is split 75/25 favoring the speaker. If your user group is a larger or more financially sound group this is an excellent option. From a user group standpoint all you really want is enough money to help with meetings. For the smaller groups a SQLSaturday pre-con can fund an entire year of meetings.

Gross Profit – Venue Cost – Lunch Cost = Net Profit

Net Profit is split 75/25

1875 -100 -225 = 1550

25% of 1550 = 387.50

75% of 1550 = 1162.50

With this option the UG gets $387.50, but the speaker still has travel costs so subtract $540 from his profit and he gets $622.50.

Option 3 – This option is what most folks used to do and I suspect many still do. However, most groups opt for one of the options above these days. I suspect the reason this option has lost popularity is because it doesn’t attract speakers with the lower profit margin. Look at the numbers and you’ll see what I mean. If the flight or hotel cost was higher or the speaker had to rent a car (which I did not even factor in, but is extremely common) then the speaker would be lucky to even break even and might even lose money.

Gross Profit – Venue Cost – Lunch Cost = Net Profit

Net Profit is split 50/50

1875 -100 -225 = 1550

1550/2 = 775

So here the UG gets $775 and the speaker, after travel costs, gets $235.
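For reference, here are all three options worked through with the example numbers above (a quick Python sketch; the figures are the post’s estimates, not real costs):

```python
# Pre-con profit splits using the example figures from the post:
# 15 attendees x $125, $100 venue, $225 lunch, $540 speaker travel.

gross = 15 * 125                        # 1875
venue, lunch, travel = 100, 225, 540

# Option 1: all costs (including travel) off the top, then a 50/50 split.
net1 = gross - venue - lunch - travel   # 1010
ug1 = speaker1 = net1 / 2               # 505 each

# Option 2: speaker pays own travel, 75/25 split favoring the speaker.
net2 = gross - venue - lunch            # 1550
ug2 = net2 * 0.25                       # 387.50
speaker2 = net2 * 0.75 - travel         # 622.50

# Option 3: speaker pays own travel, 50/50 split.
ug3 = net2 / 2                          # 775
speaker3 = net2 / 2 - travel            # 235
```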

I hope this helps anyone out there hosting a pre-con to at least be able to plug some numbers into these options and make a more informed decision for your user group. The other thing to factor in is that most SQLSaturday events that host a pre-con actually host two or more sessions. That means the profit doubles or triples for the user group, while each speaker’s profit stays the same as shown above.

So these are the 3 options I have seen used, but I’m curious if anyone else is using a different method. If so, please post it in the comments for everyone to see. I’m also curious what method your group is using and if this post made you think about changing it.

I’ll be doing a pre-con for SQLSaturday #331 in Denver, Colorado on September 19th, 2014 titled “A Day of High Availability and Disaster Recovery”. If you can make it, I would love to have you in the class. We are going to cover backups, Windows clustering, AlwaysOn Failover Cluster Instances, AlwaysOn Availability Groups, and more. This class will take each of those technologies in a progressive order so they build on each other. At the end of the day we will have a single solution built out in virtual machines on my laptop that uses all of those technologies to form a comprehensive, real-world high availability and disaster recovery architecture. Click the button below to register, and I have included the session abstract as well.

Let’s spend a day looking at several High Availability and Disaster Recovery solutions for SQL Server. We’ll start off with a solid foundation as we look at backups and how to both configure and performance tune them. After we build our foundation, we’ll take a look at SQL Server mirroring and why it’s still an important tool in our toolbox. Since all of the HA/DR solutions we will be looking at either sit on top of or can be combined with Windows Failover Clustering, we’ll learn how to set up and configure a Windows cluster to host these solutions. Next we’ll build on that platform by looking at SQL Server AlwaysOn Failover Cluster Instances. Last we’ll dive into how to set up SQL Server AlwaysOn Availability Groups. Once we have a firm understanding of all these technologies we’ll see how they all work together by discussing a case study and developing a comprehensive solution. Here’s what you will learn:

SQL Server backup types
SQL Server recovery models
How to design a backup plan
How to performance tune your backups for free
How to configure mirroring
Mirroring configuration tips to increase throughput
Set up and configure a Windows Failover Cluster
Discover how to properly configure quorum to support the AlwaysOn feature set
Set up and configure a SQL AlwaysOn Failover Cluster Instance
Set up and configure a SQL AlwaysOn Availability Group

UPDATE

I thought I would post an update after the event. The class went great and we had some excellent discussions! 5pm came up on us really fast, and although I had covered all the material, I still had some more demos. I offered to stay and go through them, and about 7 of the 15 attendees decided to hang around. I kept going for another hour before we got kicked out of the facility. It was a fantastic day and I had an absolute blast. I had added a case study for us to go over and see how many ways we could solve the business requirements, but we simply didn’t have time for that. The feedback I got was excellent and not a single negative thing was mentioned; in fact, here is a testimonial from one of the attendees.

This past weekend was SQL Saturday #331 Denver. I always enjoy SQL Saturdays, but I was especially looking forward to this weekend because I was going to attend a SQL Saturday Pre-Con for the first time! The pre-con was fantastic! The Denver SQL Server User Group did an outstanding job, and it was a wonderful day of learning and networking.

I attended Ryan Adams’ session A Day of High Availability and Disaster Recovery. Ryan did an excellent job with this all-day session. You could tell that his agenda was carefully planned to build upon each module. Ryan’s first module was an in-depth tour of backups and restores, pointing out that these concepts were foundational to all High Availability (HA) and Disaster Recovery (DR) solutions. In his next module, Ryan expanded on backup/restores as he introduced us to mirroring. Ryan then walked us through the concept of Windows Server Failover Clustering (WSFC). Culminating in his module on AlwaysOn Failover Cluster Instances and Availability Groups, Ryan tied all the concepts together providing a holistic view of HA/DR. I would definitely recommend this all-day session to anyone looking to expand his or her knowledge of HA/DR. To find out more about Ryan Adams, you should check out his blog at ryanjadams.com.

I’ll be doing a pre-con for SQLSaturday #324 in Baton Rouge, Louisiana on August 1st, 2014 titled “A Day of High Availability and Disaster Recovery”. If you can make it, I would love to have you in the class. We are going to cover backups, Windows clustering, AlwaysOn Failover Cluster Instances, AlwaysOn Availability Groups, and more. This class will take each of those technologies in a progressive order so they build on each other. At the end of the day we will have a single solution built out in virtual machines on my laptop that uses all of those technologies to form a comprehensive, real-world high availability and disaster recovery architecture. Click the button below to register, and I have included the session abstract as well.

Let’s spend a day looking at several High Availability and Disaster Recovery solutions for SQL Server. We’ll start off with a solid foundation as we look at backups and how to both configure and performance tune them. After we build our foundation, we’ll take a look at SQL Server mirroring and why it’s still an important tool in our toolbox. Since all of the HA/DR solutions we will be looking at either sit on top of or can be combined with Windows Failover Clustering, we’ll learn how to set up and configure a Windows cluster to host these solutions. Next we’ll build on that platform by looking at SQL Server AlwaysOn Failover Cluster Instances. Last we’ll dive into how to set up SQL Server AlwaysOn Availability Groups. Once we have a firm understanding of all these technologies we’ll see how they all work together by discussing a case study and developing a comprehensive solution. Here’s what you will learn:

SQL Server backup types
SQL Server recovery models
How to design a backup plan
How to performance tune your backups for free
How to configure mirroring
Mirroring configuration tips to increase throughput
Set up and configure a Windows Failover Cluster
Discover how to properly configure quorum to support the AlwaysOn feature set
Set up and configure a SQL AlwaysOn Failover Cluster Instance
Set up and configure a SQL AlwaysOn Availability Group

I am excited and humbled to be speaking at the PASS Summit for my second year in a row! There were 943 submissions and only 144 spots, so I am really excited and honored to be a part of the largest and most amazing SQL Server conference in the world. They are expecting around 5,000 people this year in Seattle from November 4-7.

The current full conference price is $1595 and considering the speakers are consultants and folks like you and me that deploy these technologies in the real world every day, it’s worth every penny. However, today (June 27th, 2014) is the last day to register at that price. It goes up by $300 to $1895 tomorrow. I have more good news. You can contact your local chapter or virtual chapter to get a discount code worth $150 off. The codes are good at the current price and after the price hike. There will be more price hikes as the conference date gets closer.

I’m President of the PASS Performance Virtual Chapter so if you are looking for that code, or don’t have time to get one from your chapter, you can use this code:

VCSUM23

At the Summit I will be presenting a session titled “SQL Server AlwaysOn Quickstart”. I would love to see you in my session. Even if you have seen me present this session before, I’ll be making some changes and adding content since I have more time. It is a beginner level introduction, but I talk about every aspect of the technology. Here is the abstract so you know what to expect, and I hope to see you there!

In this presentation I’ll explain what the SQL Server AlwaysOn high availability and disaster recovery solution is all about. I’ll talk about the different levels of protection it provides through Windows Clustering, SQL Clustering, and Availability Groups. We’ll discuss how these three things come together to protect your databases. We’ll finish with a dive into availability group configuration, the new capabilities it gives us, and what’s new in SQL Server 2014.

The Kerberos Configuration Manager is a really handy tool that can make configuring Kerberos a much easier task, but it’s got a nasty little bug. Configuring Kerberos can be tricky. In what way you ask? Here is a short list of just some of the things you have to consider.

What is my SPN supposed to look like?

Should I let SQL Server handle registering my SPNs or should I do it manually?

Are there special considerations if I’m running a failover cluster?

Where does the SPN go?

Do I have permissions to add/change/remove SPNs and if not what is the permission I need to request?

Do I need delegation enabled and on which account?

That’s the short list, but you don’t have to worry about all that stuff because you have the Kerberos Configuration Manager, right? Not so fast.

This tool works great for most scenarios, but if your environment has a split DNS, multiple domains, or multiple DNS namespaces, you’d better take a second look at the SPNs it suggests. Many DBAs are not familiar with these concepts, so let me give a really simple explanation and then we can work through an example. A split DNS is where hosts are resolvable in more than one namespace. This is fairly common when companies merge or buy out other companies.

Let’s say the company “Movie Studio A” uses the DNS namespace of movies.com and all hosts are registered in that domain. When they create their Active Directory they decide to call it MovieStudioA.com. They join all their servers to the domain and use movies.com as their primary DNS suffix. Server1 is now resolvable as server1.movies.com and server1.moviestudioa.com. The problem here is if you open a command prompt and ping “Server1” it will resolve as server1.movies.com because of the DNS suffix.

Kerberos will only work with the Active Directory domain name and NOT any other resolvable DNS namespace. Unfortunately the Kerberos Configuration Manager makes SPN suggestions based on how the client machine resolves the server name you input. What it should do after resolving and contacting the server is get the domain it is joined to and correctly build the FQDN, but that is not the case.

Let’s see what it looks like. I have a server called Server1 that is joined to the stars.com domain. The FQDN is Server1.stars.com. This server is also registered in another DNS namespace called DNSOnlyDomain.com. Here is what the Kerberos Configuration Manager says the SPNs should be.

Kerberos Configuration Manager with bad SPNs

Those SPNs are not correct because they are built from a DNS-only namespace and not the Active Directory domain. They should reflect the Active Directory domain and look like this.

MSSQLSvc/server1.stars.com

MSSQLSvc/server1.stars.com:1433
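In other words, a correct SPN is just the MSSQLSvc service class plus the host’s Active Directory FQDN (and port). A small sketch using the example names above (illustrative only; it builds strings and does not query AD):

```python
# Build SQL Server SPNs from the host name and its *Active Directory* domain,
# not whatever DNS suffix the client machine happens to resolve first.

def sql_spns(host: str, ad_domain: str, port: int = 1433):
    """Return the instance's expected SPNs: with and without the port."""
    fqdn = f"{host.lower()}.{ad_domain.lower()}"
    return [f"MSSQLSvc/{fqdn}", f"MSSQLSvc/{fqdn}:{port}"]

print(sql_spns("Server1", "stars.com"))
```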

If you are using the Kerberos Configuration Manager make sure you know what Active Directory domain your server is joined to so you can identify if the suggested SPNs are correct.