Recently I got a chance to put together a presentation and a demo around TypeScript and SharePoint. The presentation covered a few fundamental TypeScript concepts, and the demo was a walkthrough of implementing a simple use case around SharePoint List CRUD operations. The apps were built using a couple of different approaches: jQuery & JavaScript CSOM, and then adding TypeScript to the mix.

Had to try something in SharePoint 2013 and was setting up a quick new VM environment. After the OS setup, I ran the prerequisites installer and bumped into this error: The tool was unable to install Application Server Role, Web Server (IIS) Role.

Had been through this one before, and if I recall it right, an individual\manual install got me through then. But today, that approach didn't work!

Quickly did some research again and found some threads leading in the following directions:

Manual Install via the Server Manager

Using PowerShell to mimic the prerequisites (roles\features) install

Offline installation

Even a hotfix install
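For reference, the PowerShell route from the list above can be sketched roughly like this. This is a hedged example, not the exact commands from any of those threads; the feature names below are the standard ones on Windows Server 2012, but verify them first with Get-WindowsFeature:

```powershell
# Sketch only - run in an elevated PowerShell session on Windows Server 2012.
# Confirm exact feature names first, e.g.: Get-WindowsFeature *App*, *Web*
Import-Module ServerManager
Install-WindowsFeature Application-Server, Web-Server -IncludeManagementTools
```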

More details can be found in this MS article here. None of those got me through, hence this blog…the solution I found was kind of…different.

Here, take a look at the screenshot showing a few lines from the error log.

Even before we get to the actual error, the log is complaining about something else and fails earlier…highlighted in red.

It turns out that in the "C:\Windows\System32" directory there is no file by the name "ServerManagerCmd.exe"; it's "ServerManager.exe", the executable that loads Server Manager. Once the root cause analysis was complete, the solution was simple: just copy the original file and rename the copy to add the extra "Cmd" in there, and it worked just fine.
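Expressed as a command, the fix looks like this (a sketch; run it from an elevated PowerShell prompt):

```powershell
# Server Manager ships as ServerManager.exe; the prerequisite installer
# looks for ServerManagerCmd.exe, so create a renamed copy alongside it.
Copy-Item C:\Windows\System32\ServerManager.exe `
          C:\Windows\System32\ServerManagerCmd.exe
```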

Skype Developer Platform
The Skype SDKs and APIs make up a pretty powerful and well-rounded platform for writing a variety of communication solutions. The page here details the various components of the Skype Developer Platform as well as the supported topologies. No point in repeating the content again, although I would like to highlight one important thing…Skype for Business Online is only supported by the client SDK at the moment; the Skype Web SDK doesn't support it yet. I find the Skype Web SDK pretty interesting and am waiting eagerly to get my hands on the Office 365 compatible version. Meanwhile, here is the FAQ addressing a few key points.

This particular requirement was to build an app which would connect to Skype for Business Online and get the required data for further processing and\or reporting purposes. I have a Windows 10 machine with Visual Studio 2015, downloaded the SDK from here and was all set to go. The only challenge was that each time I tried installing it, it kept giving me this error…

Not sure why it was asking for Visual Studio 2010; one would make a pretty reasonable guess that the installer contains a few DLLs (references to the object model & web service APIs), sample code, and maybe help docs, etc. Of course, I was not willing to go back to VS 2010.

A quick search in the community returned a couple of approaches…

The first one dealt with registry edits; the idea is to export the VS 20XX hive, edit the relevant keys to 2010 and then import it back into the registry. This would get past the installer check and allow the installation to proceed. It worked for some, but I was not very comfortable with that.

The second approach was to use a tool (like WinRAR or similar) and extract the files from the downloaded exe (lyncsdk.exe). The extracted files in turn would then be used to install the SDK. Bingo! That one worked for me. I extracted the files and picked the correct version (32-bit or 64-bit) to install the SDK successfully.

Once the installation is complete, another thing to note is that the samples are located at "%PROGRAMFILES(X86)%\Microsoft Office 2013\LyncSDK\samples" (in Windows 10 at least) and NOT at "%PROGRAMFILES(X86)%\Microsoft Lync\SDK\Samples" as indicated in the documentation. The samples have a decent variety to get one started. The sample folder also contains a zipped file by the name microsamples.zip, definitely worth exploring.

Last but not least, do remember to copy the Project Templates from the install location (%PROGRAMFILES(X86)%\Microsoft Office 2013\ProjectTemplates) to the appropriate Visual Studio folder (%userprofile%\Documents\Visual Studio 2015\Templates\ProjectTemplates) to ensure the templates are available when you create a new project. There are two categories in there, CSharp & Silverlight. You may choose\copy one or both based on your needs\preferences.
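That copy step can be done in one go from PowerShell. A hedged sketch; the Documents path below assumes the default Visual Studio 2015 location, so adjust it if yours differs:

```powershell
# Copy the Lync SDK project templates into the VS 2015 user templates folder.
$src = "${env:ProgramFiles(x86)}\Microsoft Office 2013\ProjectTemplates"
$dst = "$env:USERPROFILE\Documents\Visual Studio 2015\Templates\ProjectTemplates"
Copy-Item -Path "$src\*" -Destination $dst -Recurse -Force
```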

By definition, the title references contradicting items that don't seem to belong together. Bear with me for a moment and I'll connect the dots for you.

First, let's talk a little bit about the Choice Overload issue, and for that we need to go eat something. Imagine a new restaurant that you would like to visit and try out over the weekend. You are at the table with your company and are presented with the menu. Open it, and you are kind of inching away from your comfort zone. Why? Because you are there for the first time, the menu has 10 different cuisines with multiple options in each…and guess what, today's special is equally daunting! The more the choices, the tougher it gets to decide on something specific. In situations like these, I prefer asking for help, and the waitresses are kind enough to oblige.

More often than not, the users' experience is similar when they are presented with a new Intranet which has 10s if not 100s of features\functionalities. Unfortunately, most of the time there is no warm support from a waitress alongside to explain what the features are for and when they should be used. At that moment, they are really looking for a simple Intranet solution that would help them get the information they need to complete their routine activities on a day-to-day basis.

Microsoft, having realized this for a while now, has consistently been taking steps in the right direction in an attempt to fix this. There has been a fair amount of improvement, but unfortunately the fact remains that Choice Overload eventually turns into User Adoption issues and continues to haunt us.

In other words, the overload of features makes it scary, to varying degrees, for new as well as experienced users to engage with the intranet and wholeheartedly adopt it. More often than not, only 20% of the features in any area (e.g. document management) are required by the average user. The remaining 80% of the features are used by a select few and get in the way instead of helping the average users.

The fear of breaking something keeps them away from using even the required features, and they start looking for alternatives to go around the intranet to get the job done.

So what would be an effective solution to address this issue? How have some organizations been able to deal with this and get the upper hand? To be honest, I don't have the silver bullet in this two-pager, but I would love to share what has been effective and worked for the customers that I got a chance to engage with.

Before we get to that though, let's first define the common core components that were observed across those intranet implementations. If you would like to get to the summary sooner, you are welcome to skip the items below and scroll down to the "Wrapping up" section directly.

Core Intranet features

By core components, I am referring to the items that are mandatory, must-have features of an intranet for organizations of different sizes with different business needs.

Landing Page & Layouts

The landing page & layouts set the foundation of a successful intranet implementation, and they definitely go beyond good looks. The page obviously has to be pretty and inviting so that users feel like coming back to it, but over and above that, the content has to be meaningful, with a clear value-add that helps users in their day-to-day activities. The business's core focus as well as the internal culture define and drive the page layout as well as the content for the organization. Some organizations like to keep the content light and the pages very simple, whereas in other cases the need is for a more information-rich page, which makes them kind of busy. Each approach has its pros and cons. Look at how Google and Yahoo have different ways of presenting their search landing pages. Not exactly an intranet example, but you get the point.

Along with the organizational culture, the content of the page is also driven by the target audience. Whether the page is targeted at global users, at a particular department, or built for a special event impacts the layout and content to a great extent. The sequencing of the various content items on each page is equally important. Notice how announcements are always placed in the top left corner of the page (for a left-to-right language).

Who owns the page content, and how effective and engaging the page is for the users, are vital elements that deserve more than a casual mention; I will leave them for later, along with the other advanced items.

One item, though, has earned its place in the must-have list and simply can't be left out: a responsive, mobile-accessible design. For an effective Intranet solution, the landing page has to be mobile-compatible and responsive in nature. The pages have to be engineered in such a fashion that they are easily usable on a mobile device (not a miniature version of the web page) and the critical items are available without a need to click around to get to them.

Branding

It's equally, if not more, important to have internal intranet branding for any organization. It gives the users a sense of familiarity & comfort and the feeling of being a part of something; something important.

Depending on the flexibility allowed, it is also an excellent way to establish individual team identities to some extent. For example, while following the organizational guidelines and permitted colors, some department sites can be more casual and some more formal. It goes a long way towards bringing in a sense of belonging and building the concept of "My" Intranet versus "An" Intranet.

Along with the regular branding, we also need to maintain the freshness of the intranet at the global as well as the local (department) level. Allowing easy ways to quickly change a few items in the branding, in a controlled way, can be very engaging. Google Doodles are an excellent example of connecting back to an occasion, or at least getting users intrigued about it.

Navigation

Any reasonably sized intranet will be classified into multiple sections based on some criteria; departments are a pretty common example. This helps in delegating administrative activities as well as securing content access for each piece effectively.

From the end user's perspective, it becomes very important to provide a seamless global and local navigation experience for accessing the various items without getting lost. The navigation strategy has to be designed in the most optimal fashion, such that the number of clicks to reach point "B" from point "A" is minimal and the context is preserved too.

Content

Nobody likes an empty store. If I walk into a brand new toy store with my kid and see empty racks with beautiful name-tags marking the various categories AND a big white banner in each saying "toys coming soon", I can only imagine his reaction. Users are no different when they come to an intranet site. They are expecting their favorite toy right in front of them, ready to play! On a more serious note, their documents, pages and other important items have to be there, and they have to be organized as close to the way they were in the old system as possible; users don't like change!

What could be worse than an empty store? Let's take a different example this time. Imagine someone walking into a store on a Friday evening expecting Total Wine or BevMo (love you both) and finding stuff from Lowe's (love you too). The reaction is beyond the reach of my imagination.

To drive the point home, both the examples above emphasize the fact that users expect the sites to have relevant content organized and available before they start using them. Whether the content is old, coming from the existing system, or brand new, coming from a content-seeding activity, doesn't matter. And I would say this is a very reasonable expectation, as well as one of the major causes of making or breaking a successful intranet solution with a high user adoption rate.

Search

Seeking happiness and peace in life is one thing, and finding a document in an intranet is another. The former is hard to find (let me know if someone has found it…I'm still searching), but the latter should be comparatively easier and should not turn into an adventure trip.

No matter how well the intranet is organized to ensure content falls into the right bucket, as the content grows it is not humanly possible to remember everything. That's where search steps in. Search makes the system work for the user instead of the other way around. An effective and accurate search is very important for the intranet solution to be successful. Providing different search scopes is a great way to deliver a more effective search experience, and also a subtle way to educate users on how the content has been organized, along with the various sources available for search.

Am I missing something?

To keep it short, I have obviously not covered all the details here; for example, vertical-friendly content seeding instead of vanilla content, or some key non-functional items like user training. Based on the feedback, I will definitely attempt to cover these in later blogs.

Please do point out, though, if you think there are other absolute must-have items that I have missed and should have covered…hoping to get some practical insight from the trenches on what else worked for you.

Wrapping up

A highly simplified version of an intranet could be equated to a black box: data in, information out. SharePoint technical features and capabilities are a means to get there and meet the business needs of the intranet. I like to think of SharePoint as Lego building blocks, with each block being a discrete set of functionality that can be used to build the core as well as the extended features of the intranet solution. As you progress and evolve, you can keep adding functionality at the right time.

The key to a successful implementation is to keep it simple and pace the feature introduction. To that end, the Progressive Disclosure technique, accompanied by a controlled change management process and user training, has been very effective so far. Even though Progressive Disclosure is considered more of a UX technique, the concept can easily be extended to feature and functionality exposure as well. It helps users tremendously to start simple, focus on the core functionality first to finish a simple business activity, and then, if required, switch to the advanced functionality without losing the context.

To implement and introduce features using the progressive disclosure technique, two patterns can be developed further and used as appropriate: SharePoint Composites can be leveraged to meet simple to low-medium complexity requirements, or a more mature, robust framework can be built by using and extending the SharePoint out-of-the-box features. In either pattern, the assumption is that we need to react fast to meet the inevitable, changing business needs and be prompt. With that basic assumption, how one can quickly adapt, build and enable new features is the key. Coding, unfortunately, doesn't always help.

In the following blogs, I would like to share how we have addressed practical and hypothetical scenarios using these patterns, SharePoint Composites, as well as our product\solution SocialXtend.

Recently, in one of the projects involving SharePoint 2013, AppFabric and Windows Workflow 4.5, we faced an interesting issue while installing\configuring AppFabric. Every time, the installer would run for a while and then error out with the following message:

The blogs listed above, and similar blogs along these lines, talk about ensuring the prerequisites are met and the PSModulePath variable is set correctly, and they have worked for quite a few people. Unfortunately, we had a slightly different issue at hand, and in spite of ensuring that those couple of points were taken care of, the installer would simply not go past that error.

Digging further into the logs generated after the error message, it turned out that the issue was with the two AD groups, AS_Administrators and AS_Observers, being present already, maybe from a previous attempt. This was causing the installer to error out, since it was not able to create these groups. We deleted the two groups, ran the installer again, and yup, it worked like a charm this time!
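If you hit the same state, removing the leftover groups can be done like this. A sketch only: on our server these behaved like local groups, so the commands below assume that; if the groups live in Active Directory proper, use your AD tooling (e.g. Active Directory Users and Computers) to delete them instead.

```powershell
# Remove the stale groups so the AppFabric installer can recreate them.
net localgroup AS_Administrators /delete
net localgroup AS_Observers /delete
```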

Now that I have your attention, the idea of this blog is to share a simple real-life use case of how collaboration can help optimize various activities and, in turn, boost productivity.

Let me set some context first…

One of the important features of a social\collaboration product is the capability to follow the various activities in the communities of choice and be notified about any activity in those communities. Notifications can be for all activities or for a selected few, depending on the individual user's preferences. Activities are of multiple types: uploading a document, asking a question, posting a microblog and starting a discussion are some examples. These activities in turn can relate to one or more topics around Technology (SharePoint, ASP.Net, J2EE), HR (New Hire, Events), Finance, Sales, etc., and can be tagged appropriately.

Meet Joe, the coffee enthusiast…

Joe is a great performer, and the performance goes to the next level if he keeps getting his regular coffee boosts. He has been with the organization for a while and has made a few like-minded friends from various disciplines who are equally passionate about their work and, of course, coffee. Creating a community for coffee drinkers would be their dream come true, but the corporate IT policies are pretty strict about governance and would only allow a new community if there is a strong business justification behind it.

There is a public (everyone in the organization has access to it) community, though, named HR Events, which is built and owned by HR. The community is built around and for the various internal events and allows people to share general content, e.g. pictures, short videos, conversations around current topics, etc.

As there is a wide variety of content being exchanged in this community around a wide array of topics, users typically pick and choose selective notifications on a weekly frequency, so that they get a weekly digest of the activities and stay in the loop rather than being spammed on each and every activity.

The coffee buzzer…

Leveraging the “Follow Tag” feature, Joe and his friends come up with a simple but effective idea…introducing the Coffee Buzzer!!!

A tag named "Coffeezzz" is created in the HR Events community. Joe and his friends all follow the tag; whenever someone makes a fresh pot of coffee, they post a short microblog with a mention of the "Coffeezzz" tag in it. No sooner is the post made than the subscribed users get an email notification about it, and if interested they can join in for another engaging conversation along with a hot cup of coffee. The word spreads, and more coffee enthusiasts join the group.

Is it only about coffee?

Coffee can be replaced with any other topic of interest around Technology, HR, Sales, Finance, etc., as shared earlier. The key point here is having a platform which allows users to selectively choose and follow tags and be notified, so that they can engage in the relevant conversations.

Since the user is not chasing the information but, on the contrary, the information is being pushed to the user at the right time, the user can focus on other important items and not worry about missing something important. In a public community like the one mentioned, it sparks great ideas when people from different disciplines with varied perspectives engage in conversations.

The same concept can be carried over to more controlled (private or secret) communities, where the content as well as the audience can be restricted to the required level while still facilitating collaboration.

Would love to hear from you what platform has been put in place in your organization to meet the business collaboration needs…and in return we would love to talk about how SocialXtend has helped us as well as our customers to empower & engage employees at different levels.

I was getting ready for the upcoming webinar on Social Intranet Using SharePoint 2013 (details due) this evening, and in the process came across this error while configuring the Search Application.

The error message goes like this…

“Unable to retrieve topology component health states. This may be because the admin component is not up and running.”

Not a very welcome sight, especially when you are setting up a SharePoint 2013 demo environment on a single-machine Windows 2012 environment (excluding the DC) and have already wrestled with the "Set-SPEnterpriseSearchService -PerformanceLevel Reduced" command, along with its side effects, wiping away the sweat and blood.

Oh yes, let's jump into the ULS logs and check out what it's complaining about this time! For a change, it turned out to be a pleasant surprise; these are some of those moments we should surely relish…even though the error message was buried deep in the stack trace, the actual need was very clearly mentioned…Good job, SharePoint Team!

Indeed, I had been playing around with the environment and had messed up the DNS entries…I changed the IP, cleared the DNS cache, got the connectivity back between the two machines, and here we have the Search Application components showing up nice and bright with a green arrow next to them.
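For reference, the component health states that the UI failed to retrieve can also be checked directly from the SharePoint Management Shell. A hedged sketch, assuming a single Search Service Application in the farm:

```powershell
# Grab the Search Service Application and dump the health of each
# topology component (admin, crawl, index, query processing, etc.).
$ssa = Get-SPEnterpriseSearchServiceApplication
Get-SPEnterpriseSearchStatus -SearchApplication $ssa -Detailed
```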

Was curious to check out on the net what others are doing to fix this issue…here are some blogs that I think may be helpful to you if it's not a DC connectivity issue as in my case. Even though the error message is the same, the root causes and the solutions are different…

Reporting on the SharePoint External Lists
Today we faced an interesting issue. Was the limitation technical, human, or a combination of both? Not entirely sure about that. I thought it worth sharing, as it definitely ties in with a thought process that involves both aspects. I am simplifying the scenario as well as abstracting the client details, for obvious reasons.

The requirement goes something like this…

There are different users in the system who have different levels of access and play different roles. Some are responsible for entering the data, and some can look through the entered data, edit it if required, as well as draw reports out of it.

The technical stack consists of SharePoint as the base; InfoPath is used as the forms technology, and the data from the forms finally lands in a SQL Server database in the backend. InfoPath is connected to an external list, which in turn uses BCS to get the data from the same SQL Server.

The dilemma…

There is a reporting and analytics need on the data present in the external list. The issue is that external lists are not easily exportable to Excel, so what is the alternative?

The solution…is it?

Since it is tied into SharePoint, your brain starts to spin and yell fancy SharePoint terms, and you go: SharePoint Workspace, SharePoint Designer, and even Report Builder 3.0. The challenge is that there isn't any way to consume that information using these tools directly, unless you opt for a customized solution (involving code) rather than a configurable one.

So you start to think harder and, in the process, start making things more complicated; it's a "DEV TRAIT", a natural affinity towards complicated solutions…you also start looking into code-based solutions or maybe 3rd-party solutions to make it happen.

Wait a minute…I need a break!

Now it sounds familiar, doesn't it…there is a problem, and you start looking for a solution; a solution comes to your mind but has some limitations; you start looking at addressing that limitation next, and in the process you get carried far, far away from the actual problem you were trying to solve. It has happened to me multiple times; not sure about you?

I think this is exactly the time to take a break…and then comes the eureka moment!

Any external data items that are brought into SharePoint using BCS are not stored in SharePoint's content database. Here is a blurb from an MSDN article, just in case you want to check on that.

“Notice that unlike a SharePoint list whose data is stored in the SharePoint content database, the data in an external list is stored only in the external system. External data is brought into the SharePoint list at run time when you navigate to the list.”

So there is a simple solution to the reporting\analytics need: instead of going the roundabout way and accessing the data through SharePoint's external list, we can access the SQL Server database table\view directly. Since the (committed) information always remains the same, you will get the same results as from the SharePoint list, and you can build fancy reports or do pivoting using Excel, etc. And it worked just fine for this situation.
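As a sketch of what that direct access can look like from a script (the server, database, table, and column names below are made up for illustration; this is not the client's actual schema):

```powershell
# Requires the SQL Server PowerShell module (Invoke-Sqlcmd).
# SQLSRV01, FormsDb, and dbo.FormSubmissions are hypothetical names -
# point these at the same table\view the BCS external content type uses.
Invoke-Sqlcmd -ServerInstance "SQLSRV01" -Database "FormsDb" `
    -Query "SELECT SubmittedBy, Region, Amount FROM dbo.FormSubmissions"
```

The same connection details can equally be plugged into Excel's "From SQL Server" data source or Report Builder for the pivoting and reporting part.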

More than the technical part of it, the intent of sharing this incident was to highlight how we at times get trapped in our own thought process and force ourselves into a different direction than what we initially desired!

Let's say we have a wiki library with an event receiver attached to the "ItemAdded" event. We write some code that is supposed to fire when an item is added. For the sake of simplicity, let's say we want to audit who created a new wiki page, so we log it in some way. After writing & deploying the code, what we observe is that not only does the "ItemAdded" event fire, but the "ItemUpdated" event fires as well. Why does SharePoint do that?

We just want to log\track a single event, since it's very much possible that the user may not want to edit the page or may just cancel out of it.

Q: How do we conditionally handle the event?

One possible solution:

Before we look at the possible solution, a couple of facts:

As you may know, each event (ItemAdding, ItemAdded, etc.; check this out for the complete listing of SharePoint events) generates a new "instance" of the class that extends the base class SPItemEventReceiver.

The ItemAdded event is always going to fire before the ItemUpdated event, irrespective of the Synchronization property.

The challenge is to persist the fact that a new wiki page is being added, preserve it between the "ItemAdded" event and the "ItemUpdated" event, and then, based on that information, conditionally process the ItemUpdated event. Here is where a "Key-Value" based collection can come to our rescue.

The approach:

Here are the steps at a high level…

Create a static variable of a collection type, e.g. a Hashtable

In the "ItemAdded" event, use the "ListItemId" property from the event args (default name, properties) and store it as the "key" in the variable created in step 1, the value being a flag or status of some kind…e.g. I used "new"

In the "ItemUpdated" event, use the same variable created in step 1 and get the flag\status using the same "ListItemId" property from the event args. Once the value is available, we can then do the conditional processing.

So even though multiple events fire in there (which is technically correct), we can still distinguish between them and conditionally do some processing. One important thing to remember is that once we have used the value in the "ItemUpdated" event, we need to delete that entry from the collection so that the variable stays lightweight and doesn't cause unnecessary load.
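The steps above can be sketched in C# roughly like this. A hedged sketch, not production code: the class and variable names are mine, and I've used a thread-safe ConcurrentDictionary in place of the raw Hashtable mentioned above, since event receivers can run concurrently for different items:

```csharp
using System.Collections.Concurrent;
using Microsoft.SharePoint;

public class WikiAuditReceiver : SPItemEventReceiver
{
    // Static, so the value survives across the separate receiver
    // instances SharePoint creates for ItemAdded and ItemUpdated.
    private static readonly ConcurrentDictionary<int, string> NewItems =
        new ConcurrentDictionary<int, string>();

    public override void ItemAdded(SPItemEventProperties properties)
    {
        // Step 2: key = ListItemId, value = a "new" flag.
        NewItems[properties.ListItemId] = "new";
        // ... audit: log who created the wiki page ...
    }

    public override void ItemUpdated(SPItemEventProperties properties)
    {
        // Step 3: look up (and remove) the flag in one shot, so the
        // collection stays lightweight.
        string flag;
        if (NewItems.TryRemove(properties.ListItemId, out flag))
        {
            // This update fired as part of page creation - skip it.
            return;
        }
        // ... a genuine edit: process the update here ...
    }
}
```

TryRemove both reads the flag and deletes the entry, which takes care of the cleanup concern mentioned above in a single call.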