rdhaundiyal – https://rdhaundiyal.wordpress.com
Sitecore 9 : Content Editors Federated Authentication with Gmail
https://rdhaundiyal.wordpress.com/2018/02/08/sitecore-9-content-editors-federated-authentication-with-gmail/
Thu, 08 Feb 2018 08:25:42 +0000

Recently, in one of my Sitecore projects, I got a requirement that content editors should be able to log in using a third-party identity provider such as Google. In previous projects I have used this multiple times to authenticate website users, but for Sitecore content users it was a bit different. I referred to multiple articles to implement this, and this post is basically a consolidation of those articles along with some changes related to the user builder and the Google authentication provider. Below are a few references which are worth reading, as they describe the flow in depth.

Though Sitecore 9 provides out-of-the-box support for OWIN authentication, there are a few places where you might end up writing some custom code. The article below shows how you can authenticate content editors through Google.

Before starting the Sitecore part, make sure you have created a Google application and have the corresponding client ID and secret, which will be used for Google authentication.

Override the ProcessCore method, where you set the provider to GoogleOAuth2AuthenticationProvider (provided by the Microsoft identity libraries) and, at the end, configure the app to use Google authentication as below.
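A minimal sketch of such a processor, assuming the Sitecore.Owin.Authentication assemblies and the Microsoft.Owin.Security.Google NuGet package; base-class signatures vary slightly between Sitecore 9 releases, and the namespace, class name, and client id/secret placeholders are illustrative:

```csharp
using Microsoft.Owin.Security.Google;
using Sitecore.Owin.Authentication.Configuration;
using Sitecore.Owin.Authentication.Pipelines.IdentityProviders;

namespace Piccolo.Foundation.Accounts.Pipelines
{
    public class GoogleIdentityProvider : IdentityProvidersProcessor
    {
        public GoogleIdentityProvider(FederatedAuthenticationConfiguration configuration)
            : base(configuration)
        {
        }

        // Must match the identity provider name declared in the
        // federated authentication config patch.
        protected override string IdentityProviderName => "Google";

        protected override void ProcessCore(IdentityProvidersArgs args)
        {
            var options = new GoogleOAuth2AuthenticationOptions
            {
                // Values from the Google application created earlier.
                ClientId = "<client-id>",
                ClientSecret = "<client-secret>",
                // Keep the authentication type aligned with the
                // provider name configured in Sitecore.
                AuthenticationType = IdentityProviderName,
                // The Microsoft-provided provider; hooks such as
                // OnAuthenticated can be used here to transform claims.
                Provider = new GoogleOAuth2AuthenticationProvider()
            };

            args.App.UseGoogleAuthentication(options);
        }
    }
}
```

The processor is then registered in the owin.identityProviders pipeline via a config patch.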

Reset the application pool and run the Sitecore instance. You will see a screen like the one below, with an additional button to log in using Google.

On clicking it, you will be redirected to the Google login page; after signing in, you will be redirected back to the Sitecore login page with the error below.

The user is now created in Sitecore, but it does not have any access to the system. An admin user needs to grant access so that the user can use the Sitecore CMS as an editor.

Before that, there is one more thing we need to change. The default implementation of ExternalUserBuilder in Sitecore creates a user name containing a GUID, which is very difficult to identify. To resolve this, create a class CustomUserBuilder inheriting from the external user builder and override the CreateUniqueUserName method to use the email address as the user name.
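A sketch of the builder, assuming the DefaultExternalUserBuilder base class shipped with Sitecore 9; the constructor signature and the IHashEncryption namespace may need adjusting for your exact Sitecore version:

```csharp
using Microsoft.AspNet.Identity;
using Microsoft.AspNet.Identity.Owin;
using Sitecore.Owin.Authentication.Identity;
using Sitecore.Owin.Authentication.Services;

namespace Piccolo.Foundation.Accounts.Services
{
    public class CustomUserBuilder : DefaultExternalUserBuilder
    {
        public CustomUserBuilder(ApplicationUserFactory applicationUserFactory,
            IHashEncryption hashEncryption)
            : base(applicationUserFactory, hashEncryption)
        {
        }

        protected override string CreateUniqueUserName(
            UserManager<ApplicationUser> userManager,
            ExternalLoginInfo externalLoginInfo)
        {
            // Use the email claim instead of the default GUID-based name
            // so the user is easy to identify in the Security Editor.
            return externalLoginInfo.Email ?? externalLoginInfo.DefaultUserName;
        }
    }
}
```

The custom builder is wired up by patching the externalUserBuilder node of the federated authentication configuration.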

Now, if you log in as an admin user, you will see the user created in Sitecore.

Assign an appropriate member role to the user. The user should now be able to log in.

One important thing to take care of when using an external provider is that access to the login URL should be protected from website users. Otherwise you will end up with many users in the Sitecore system who are not content editors, which is also a potential security threat.


Autofac as DI container in Sitecore Helix architecture
https://rdhaundiyal.wordpress.com/2017/09/02/autofac-as-di-container-in-sitecore-helix-architecture/
Sat, 02 Sep 2017 06:41:35 +0000

The following article describes how to use Autofac as the DI container in a Sitecore application based on the Helix architecture. As you may know, in Helix the whole application is divided into multiple features, each being an independent piece of functionality that does not depend on other features. This article is based on the article written by Kevin Brechbühl at https://ctor.io/one-way-to-implement-dependency-injection-for-sitecore-habitat/

The only difference is that, instead of creating processors in each individual feature project, I will register all the dependencies in one place using the module feature of Autofac.
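For illustration, a feature's registrations can then live in its own Autofac Module; the feature name and repository types below are hypothetical:

```csharp
using Autofac;

namespace Piccolo.Feature.Navigation
{
    // Hypothetical feature service, used only to illustrate a registration.
    public interface INavigationRepository { }
    public class NavigationRepository : INavigationRepository { }

    public class NavigationModule : Module
    {
        protected override void Load(ContainerBuilder builder)
        {
            // Each feature declares its own registrations here; the
            // foundation project will discover the module by scanning.
            builder.RegisterType<NavigationRepository>()
                   .As<INavigationRepository>()
                   .InstancePerLifetimeScope();
        }
    }
}
```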

To start with, we will create a project in the Foundation layer named "ProjectName.Foundation.DependencyInjection". I have named my project Piccolo, hence the project will be "Piccolo.Foundation.DependencyInjection".

Follow these steps after creating the project:

Add the NuGet package for Autofac to the project. The Package Manager command is: Install-Package Autofac.Mvc5 -Version 4.0.2

We will create a custom pipeline processor to set Autofac as the dependency injection container in the Sitecore pipeline. Add a folder named Pipelines in "Piccolo.Foundation.DependencyInjection".

Inside Pipelines, add another folder InitializeContainer, and within it add a class InitializeContainer.cs.

Add another folder Foundation to the project "Piccolo.Foundation.DependencyInjection" and add the config file Foundation.DependencyInjection.config to it.
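The config file is a standard Sitecore include patch; a sketch of what it might contain, assuming the class lives in the Piccolo.Foundation.DependencyInjection.Pipelines.InitializeContainer namespace:

```xml
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <pipelines>
      <initialize>
        <processor type="Piccolo.Foundation.DependencyInjection.Pipelines.InitializeContainer.InitializeContainer, Piccolo.Foundation.DependencyInjection"
                   patch:after="processor[@type='Sitecore.Mvc.Pipelines.Loader.InitializeControllerFactory, Sitecore.Mvc']" />
      </initialize>
    </pipelines>
  </sitecore>
</configuration>
```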

The project should look as below once you have finished the above steps.

Populate the class InitializeContainer with the following listing:
public class InitializeContainer
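The original listing is truncated above; a minimal sketch of the processor, assuming all Helix assemblies of the solution share the Piccolo prefix, is:

```csharp
using System;
using System.Linq;
using System.Web.Mvc;
using Autofac;
using Autofac.Integration.Mvc;
using Sitecore.Pipelines;

namespace Piccolo.Foundation.DependencyInjection.Pipelines.InitializeContainer
{
    public class InitializeContainer
    {
        public void Process(PipelineArgs args)
        {
            var builder = new ContainerBuilder();

            // All Helix assemblies of this solution share the same prefix.
            var assemblies = AppDomain.CurrentDomain.GetAssemblies()
                .Where(assembly => assembly.FullName.StartsWith("Piccolo"))
                .ToArray();

            // Register MVC controllers and pick up every feature's
            // Autofac Module in one pass.
            builder.RegisterControllers(assemblies);
            builder.RegisterAssemblyModules(assemblies);

            var container = builder.Build();
            DependencyResolver.SetResolver(new AutofacDependencyResolver(container));
        }
    }
}
```

Because the registrations are discovered via RegisterAssemblyModules, adding a new feature only requires dropping a Module class into its project; nothing in the foundation project changes.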

The JMSCacheChannelConnector provided with Tridion CD sends object cache invalidation messages in binary, which the .NET-based DD4T 2.0 client cannot understand. The jar file available in the link above acts as a deployer extension and converts the binary messages into text messages so that any .NET client can understand them. The problem is that once a message is converted into a text message, the Tridion object cache subscriber in the cd_cache jar starts throwing the error "Ignoring unexpected message type".

To solve this issue, we need to modify the subscriber in cd_cache as well so that it can understand text messages. Below is the full code listing of the changes made in the dd4t-cachechannel jar file.

As you can see in the code, the validate() method is overridden in the TextJMSCacheChannelConnector class so that we can provide our own handleJmsMessage(Message msg) method. This method checks whether the message is a text message; if so, it converts it into a CacheEvent object and passes it to the handleRemoteEvent() method of the CacheChannel class to invalidate the cached item.

While publishing, all the deployers can be configured on one publishing target so that the operation is performed as a transaction.

Possible issues:

Publishing time will increase as the number of VMs increases.

A new deployer needs to be configured or removed when scaling up or down, i.e. on the addition or removal of a VM.

File replication script such as robocopy

In this approach, the files are published to a single physical location on one VM. A scheduler running on that VM executes a robocopy script which syncs this folder to the website folder on the other servers.
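For example, the scheduled task could run something like the following for each target server; the UNC paths are placeholders, /MIR mirrors the source tree (including deletions), and /R and /W limit retries on locked files:

```
robocopy \\publish-vm\deploy-root \\web01\wwwroot\site /MIR /R:2 /W:5 /LOG+:C:\logs\web01-sync.log
```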

Possible issues:

The publishing time, and when changes are reflected, depend on the frequency of the scheduler. For example, if the scheduler interval is 5 minutes, the changes will be reflected on the website after 5 minutes. Also, as the number of VMs, files and assets increases, this time will grow.

There is no guarantee that all the folders will be in sync, as it is quite possible that the script fails after syncing only a few VMs.

Creating a shared network folder so that all the content is published to this shared folder and all the website instances on different VMs point to it.

Possible issues:

This approach does not have any of the issues mentioned above, but the major challenge is a single point of failure (SPOF). If for some reason the network folder is not available, all the websites will go down. Also, you will have to provide an explicit backup mechanism, or the data will be lost.

Quite similar to the above approach, but without the single point of failure, is using Azure File Storage, which offers high availability as well as high performance. With Azure File Storage, the web content can be stored independently of the web server.

Possible issues:

If the file storage is in a different geographical location, there might be performance issues

Azure File Storage is a highly scalable and highly available file storage service which can be accessed by applications running on different VMs on Azure, just like a network shared path.

The following steps are required when implementing a Tridion CD website using Azure File Storage:

Create an Active Directory user with the same name as the storage account name and a password equal to the account key of the Azure file storage. If no Active Directory is available, you will have to create a local user with the same name and password on each VM hosting the application, which is harder to maintain but will work. Remember to set the password to never expire.

Create a web application in IIS and, in the physical path, provide the UNC path of the Azure shared file storage. Make sure you have copied all the website's physical assets into a folder on this shared storage.

Click "Connect as" and select the "Specific user" radio button.

Provide the credentials of the domain user and click OK.

Since xmogrt.dll is not a .NET assembly, it will not be loadable from a network location. You will have to delete this DLL from the bin folder of your application and copy it to %SystemDrive%\Windows\System32.

Add the domain user to the IIS_IUSRS group on the local system.

Recycle the application pool.

Repeat steps 2 to 6 on each web server that is going to be attached to the load balancer.

Setting up the deployer to publish to shared file storage

There are only two changes required in the HTTP deployer to deploy files to the shared storage.

In cd_storage.xml, in the storage section for the file system, provide the UNC path of the storage.
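A sketch of the relevant cd_storage.xml fragment, with the Azure File Storage share expressed as a UNC path; the account name, share name and folder are placeholders:

```xml
<Storage Type="filesystem" Id="defaultFile" defaultFilesystem="true">
    <Root Path="\\mystorageaccount.file.core.windows.net\tridion-share\website" />
</Storage>
```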