If you do not see anything interesting, don’t worry – just browse around and you will find something. Also, the category tree is always available on your left. Should you think that this blog needs an article about something cool, don’t hesitate to contact me.

With kind regards,
O.K.


It is a common habit in the tech world to start any product evaluation with the installation of the product and a blog post about the installation experience. We wait for new product releases, expect awesome new features and, sometimes, just “cannot wait”. I have been somewhat addicted to SQL Server for a long time (more than 15 years so far) and, of course, could not refrain from diving into the new features. So today I’m sharing my notes on the most noticeable things that grabbed my attention at the very start of my journey – the SQL Server 2016 installation process.

1. SQL Server 2016 Installation guidance

As usual, the installation process is very well documented on MSDN and TechNet (though the SQL Server section is no longer called “Books Online”), and I had no need to search for blog posts to resolve any issues. All error and warning messages have links which take you to relevant troubleshooting articles. The only thing I can complain about is that it is not obvious how to copy a link from the error dialog (hint: use Ctrl-C).

2. Tools are NOT included

This can be noted at the very start of the process – the Installation Center contains two extra links: one for SQL Server Management Tools and another for SQL Server Data Tools.

At the same time, the feature selection page does not list any management tools (neither complete nor basic) among the shared features. As for me, I do not regret their absence – there is absolutely no reason to copy unnecessary bits to every single server.

3. Some features are open-source dependent

Yes, they are! Considering Microsoft’s aggressive offer for those who use Oracle, this message looks a little bit surrealistic:

This computer does not have the Oracle Java SE Runtime Environment Version 7 Update 51 (64-bit) or higher installed. The Oracle Java SE Runtime Environment is software provided by a third party. Microsoft grants you no rights for such third-party software. You are responsible for and must separately locate, read and accept applicable third-party license terms. To continue, download the Oracle SE Java Runtime Environment from http://go.microsoft.com/fwlink/?LinkId=526030.

4. There are manual post-install steps

I got this warning because of the “Advanced Analytics Extensions” feature:

Last, but not least

My overall impression of the SQL Server 2016 installation process is quite positive: all warnings and error messages are quite informative, the process is well documented, and the user experience hasn’t changed in comparison with previous releases. Of course, some may complain about the old-school design of the installation dialogs, but for me it really doesn’t matter.

What’s next? I cannot wait to dive into the new and promising features. SQL Server 2016 Reporting Services is at the top of my list.


Microsoft has become faster and much more agile than before when it comes to interesting SQL Server downloads. Needless to say, many interesting tools and “side applications” will follow the announcement of SQL Server 2016. Here are just a few I have spotted this week:


Information technology has always been full of surprisingly contradictory beliefs, and every market, product or community has its own FAQ list or Top 10 Myths whitepaper. This week brought another “myth case” to my desk. Though it has been around for several years already, it is still hot. While my fellow database developers are busy completing another data warehousing project (a “traditional” relational solution, by the way) for a travel firm, our marketing department approached me to discuss how we could define our new data warehousing offering. The question and concern was: “Hasn’t big data killed data warehousing already?”

The question seems tricky and provokes diving into architectural details, pros and cons, and which solution better supports data intake, business analytics or interactive visualisation. I have to confess, I’m no saint, so I started with the categories that the mind of a database professional dictates – read and write efficiency, scalability, data consistency, data query technologies. The list kept growing, but was not taking me any closer to the answer. I spent some time trying to sort out differentiators for each technology, but with no success. (Technology is the key word here, so remember it and continue reading.)

The reason why I failed to produce a good comparison is quite simple – my database pro’s brain assumed that the term “data warehouse” equals “relational data warehouse”. We know that relational data warehouses (or “traditional” data warehouses, as some marketing whitepapers say) are in fact relational databases which host structured data. But what if we remove “relational” from the equation? What does “data warehouse” mean then? Can we have a non-relational DW?


Background

Data warehousing is not a new thing today. The concept was first introduced in the 1970s, and its key terms “dimension” and “fact” appeared even earlier – in the 1960s. Since then, many businesses have successfully implemented and adopted various data warehouse solutions. Though they were using a great variety of technologies, processes and ways of thinking, their goals were alike – consolidating data from scattered operational systems, making data clean and trustworthy, extracting information, and unlocking hidden knowledge. All this was necessary to improve business decisions – to make them knowledgeable rather than based on blind guesses.

Many organizations from various industries – from finance to hospitality, from healthcare to gambling – leverage the benefits provided by this decades-old concept. But technologies evolve and bring new methods of data processing, new algorithms and implementations, new features and new possibilities. The amount of data available for analysis grows dramatically. The speed of communication increases. Thus businesses face new challenges – they need to cope with a highly competitive environment which moves much faster than before, they need to evaluate the situation much more accurately, and they cannot wait.

In recent years a new trend in data warehousing has emerged: many companies are looking for ways to improve their existing solutions, which currently are:

Hard to maintain. Some base technologies are outdated and will not be supported in a matter of months, some key persons may have already left the company and (in the worst cases) some custom source code has been lost;

Slow. Well, maybe not slow, but not fast enough. Business users complain that they spend too much time on waiting for that “key report required by regulations”;

Not functional enough. The business community cannot proceed with “this simple kind of analyses” because “the data warehouse is not designed for that”.

Though all these reasons sound valid and business-justified, in most cases there are many people who are afraid of any changes to their data warehouse and show significant resistance. No surprise here: the DW is considered the informational heart of many businesses, and (we think) most people are afraid of heart surgery. (Only heartless cyborgs, we believe, are not.)

It becomes very important for IT departments to show and prove that changes to the corporate data warehouse will not be surgery but therapy; that they will be done in a qualified and controlled manner; and that all actions are planned and risks are mitigated.


I started this post with a very simple thing – I copied and pasted the title from my previous post. And I immediately got the very strange feeling of having a huge debt. Not a financial one, of course (those I try to avoid), but a blogging and writing debt. It is weird and ridiculous to publish survey results one year later! I had a choice of either moving the survey to the trash or confessing that my blogging debt is really huge 🙂

So, not only does this post present the results of the SCOM Reporting survey, it should also be considered another official relaunch of my blogging activity.

Survey results

Do you use SCOM reporting features?
Yes, both built-in reporting and Azure Operational Insights – 17%
Yes, only built-in reporting – 77%
Yes, only Azure Operational Insights – 0%
No – 6%

Are you satisfied with your reporting experience?
Yes – 6%
Almost – 38%
Doubt – 28%
Almost useless feature – 8%
Noooooooooo – 21%

Generic vs. Product-specific vs. Self-service?
Generic (i.e. reports that work for data collected by any MP) – 28%
Product-specific (i.e. reports designed for a specific MP) – 26%
Self-service – 45%

Which kind of report do you need most often?
Availability – 70%
Most common alerts – 55%
Performance/performance details – 70%
Events – 26%
Configuration – 25%
Product specific – 30%
Other – 9%

Why do you need reports?
Troubleshooting – 60%
Planning – 62%
To make my manager happy – 66%
Other – 11%

How often do you use reports?
Every day – 25%
Several times per week – 30%
Once per week – 26%
Once per month – 15%
Once per quarter – 4%

What are the most common reporting issues?
No data – 36%
Incorrect data – 13%
Incorrect aggregation – 25%
Missing features – 38%
Too slow – 38%
Bad usability/user experience – 53%
Execution errors – 15%
Other – 11%

Do you need a self-service reporting option for SCOM?
Yes, I want it on-premises – 72%
Yes, I want it in the cloud – 4%
Yes, I use Azure Operational Insights already – 4%
No, I don’t need that – 17%
Other – 4%

Your role in organization?
IT Pro (SCOM Admin) – 66%
IT Pro (not SCOM Admin) – 6%
IT Manager – 6%
IT Executive – 6%
Consultant – 15%
MP Developer – 2%

Conclusion

As for me, user perception of the SCOM reporting feature is very clearly described by the numbers and, unfortunately, it is very far from a “good enough” mark. However, the survey took place more than a year ago, so I hope that with the release of SCOM 2016 and SQL Server 2016 the situation will improve. I haven’t tried these products yet, so take this as my personal expectation, not a promise of any kind.

These days I work together with many top-notch developers, dealing with very different projects, solutions and applications (not only SCOM management packs :)). And what I like most about being in this very diverse community is the variety of questions that folks bring to the table. Here is the most recent one:

Can we store configuration information for our application in AD? If yes, then how?

I had been working a lot with Exchange Server during the last few months, so for me the answer to the first question was obvious – yes, you can store configuration information in Active Directory. Finding the answer to the second question was trickier – I didn’t have enough knowledge about implementing this kind of stuff. So I dove into the theory and ended up with a bunch of links and some samples written in PowerShell. Here it goes.

Disclaimer

All samples here are provided as is. You may use them at your own risk.

Note that any changes you apply to the AD Schema are not reversible – consider testing any changes in a lab first.

All samples are written in PowerShell, so both IT Pros and developers can use them. (I do believe that professional developers can read the code in any language and are able to easily convert PowerShell samples into C#.)

Some theory

Many modern applications have a multi-tier architecture. To be able to act as a whole, some application components might need to share configuration information with others. Exchange Server is a good example of such an application – the typical deployment includes a number of servers holding different roles (client access server, mailbox server, edge transport), distributed across the entire organization.

Active Directory suits that need very well – everyone within the organization can access the information when required and permitted, and the AD infrastructure is usually highly available.

There are a zillion ways to store the required information, and it is up to software developers to decide how exactly they want to implement that. Here are some common points of consideration:

What information should be stored?

Who should be able to access the information?

Is that information sensitive?

Where do we need that information (anywhere in the forest, anywhere in the domain, selected Domain Controllers)?

Can we (do we want to) extend the schema?

It is impossible to discuss and illustrate all possible options in one blog post, so let’s concentrate on just one example.
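As a minimal sketch of the idea, here is one option that does not require a schema extension: publishing configuration as a Service Connection Point (SCP) object, which is what many AD-integrated applications do. It assumes the ActiveDirectory PowerShell module (RSAT) is installed and you have sufficient permissions; all the names, the keyword and the URL below are hypothetical.

```powershell
# Sketch: publish application configuration as a Service Connection Point (SCP).
# Assumes the ActiveDirectory module; all names below are hypothetical examples.
Import-Module ActiveDirectory

$domainDN = (Get-ADDomain).DistinguishedName

# Create a container for the application under the System container (run once)
New-ADObject -Name 'ContosoApp' -Type 'container' -Path "CN=System,$domainDN"

# Publish the configuration as an SCP; serviceBindingInformation and keywords
# are standard multi-valued attributes of the serviceConnectionPoint class
New-ADObject -Name 'ConfigService' -Type 'serviceConnectionPoint' `
    -Path "CN=ContosoApp,CN=System,$domainDN" `
    -OtherAttributes @{
        serviceBindingInformation = 'https://config.contoso.com:8443'
        keywords                  = 'ContosoApp-Config-v1'
    }

# Any client in the domain can now locate the configuration by keyword
Get-ADObject -LDAPFilter '(keywords=ContosoApp-Config-v1)' `
    -SearchBase $domainDN -Properties serviceBindingInformation |
    Select-Object -ExpandProperty serviceBindingInformation
```

The nice property of the SCP approach is that clients do not need to know where the object lives – a keyword search against the domain (or a global catalog) finds it, and ordinary AD permissions control who can read it.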


Today I had some time to look through some forum threads at System Center Central. One of the discussions which captured my attention was about the bug in the discovery of SSRS instances with an underscore (_) in the instance name.

I will not write anything about the bug itself; at the end of the day, it was an ugly bug. Bugs happen, and some sit there for years. Believe me or not, good developers and testers blame themselves and feel unhappy when they step into a situation like this, having failed to catch it in advance.

In this post I want to discuss another thing – a couple of sentences that caught my eye. Here they are:

At least with the VB script, you could run it manually and see where it was failing. The dll process is impossible to do anything with (as far as I know) and leaves us having to call MS every time we get one of those cryptic errors.

Indeed, many modern System Center Operations Manager management packs are implemented using so-called SCOM managed modules. I know that many SCOM admins like to unseal management packs and read the source code to understand how exactly they work and how they suit their needs. Many people think that managed modules limit their ability to study the logic. In my opinion, that is absolutely incorrect, and here is a walk-through for reverse engineering MPs based on managed modules.

Theory

SCOM managed modules are in fact .NET classes – nothing more than that. Those classes are stored in assemblies (.dll files) which are bundled together with the management pack definition into an .mpb file. Luckily, a) a .NET assembly has tons of metadata inside, and b) the .NET Framework has a feature called Reflection. There are tools which leverage these capabilities and can convert an assembly back to human-readable C# code. This approach is widely used by .NET developers for research and reverse engineering, which is an integral part of the profession.

Practice

1. Get the MSI for the management pack you want to explore.

2. Unpack the MSI:

msiexec /a <FullPathToMsi> /qb TARGETDIR="<FullPathToSomeDirectory>"

3. Unseal the .mpb (this can be done with various tools; MPViewer is one of them)

4. Locate the module you want to research (drill down from the discovery, through data sources, until you find the probe with “managed” implementation)
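Before opening a full decompiler, a quick inventory of the module assembly can be done directly from PowerShell via Reflection, as mentioned in the theory section. The path below is hypothetical – point it at a .dll extracted from the unsealed .mpb:

```powershell
# Sketch: list the public types in a managed module assembly via Reflection.
# 'C:\MPUnpacked\SomeManagedModules.dll' is a hypothetical example path.
$asm = [System.Reflection.Assembly]::LoadFrom('C:\MPUnpacked\SomeManagedModules.dll')

# Module implementations are plain .NET classes; showing each type with its
# base class helps spot the probes and data sources worth decompiling
$asm.GetExportedTypes() |
    Select-Object FullName, @{ Name = 'Base'; Expression = { $_.BaseType.FullName } } |
    Sort-Object FullName
```

The type and base-class names alone often tell you which class implements the probe you traced from the discovery, so you know exactly what to open in the decompiler.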

Now you can browse the code, compare versions, find what was changed and how, and so on. Personally, I prefer not to rely on others, especially on the huge software shop located in Redmond. Yep, we pay money for the software, but other guys pay us money – most probably for getting things done, not for waiting.

Yes, that’s a very long title, but it really describes the issue that was brought to the table last week. The original question was:

All of the notification emails for these alerts only contained {2} for the Subject and Alert Name.

That was something I hadn’t heard of before, so I decided to do some quick research. As I had no access to the Exchange lab this time, I created a sample management pack which triggers alerts with some random data. There were two monitors: one with a “static” name for the alert message and another with a “dynamic” name. Here is the code for the dynamic one:


It was about a week ago that Microsoft announced an update to the SCOM Management Pack for Exchange 2013. This management pack made quite some buzz – frankly speaking, I haven’t seen anything like that since TechEd NA 2014. If you’re curious, take a look at some of the blog posts (link, link, link, link, link, link, link). Impressive, isn’t it?

So, now that we’re done with the high-level overview, let’s dive into the details and take a look at what’s under the hood.

Some basics

Please note that this MP is NOT available from the catalog; go to the download page to get it.

The management pack includes 3 files:

Microsoft.Exchange.15.mp – contains the new health model definition, as well as new rules, monitors, an updated discovery workflow, folders and views. All these things are based on PowerShell scripts, which heavily use Exchange cmdlets.

Note that many rules share data sources, so if you want to play with interval parameters, try to keep the same value for all related rules. Otherwise you may accidentally break the cook-down and get an extra monitoring footprint on your Exchange servers. Considering that some E15 cmdlets are somewhat resource-intensive, this may add extra headache for your fellow Exchange admins.

Microsoft.Exchange.15.Reports.mpb – contains the report definitions. Unlike many other management packs, this MP doesn’t rely on the generic reports library (except for the *health reports) and has its own definitions for performance and top/bottom reports. There is also one very custom report, based on a custom dataset – “Top biggest mailboxes”.

Top 6 things I like in this MP

Initially I wanted to write about the “top 5” things, but failed to decide which one to exclude. Not all points are equally important for everyone, but, from my perspective, everything mentioned below is at least “interesting”.