All posts by Sunil Singhal

Often a requirement arises to secure not just the application but also the connections made to it.

Prior to TLS 1.2, several versions of SSL and TLS were introduced to enforce transport layer security. Each of those earlier versions was vulnerable to some form of attack or threat, which was then fixed in the next version.

In order to enforce security, you may want to accept connections only over TLS 1.2, and thus enable only TLSv1.2 while disabling all other versions: SSLv3, TLS 1.0, TLS 1.1, etc.

The purpose of this article is to list the steps required to enable only TLS 1.2, and disable all other versions, in a Spring Boot application.

PRE-REQUISITES

JRE

IDE of your choice

Springboot Application

Certificates – either self-signed or from a public CA

This article assumes that you have already enabled SSL in your application and configured certificates and secure HTTP connectors, either programmatically or through configuration.

HOW DOES IT WORK?

Before we look into the steps, let's first understand how things work. Basically, an application sets up a virtual host/container (Jetty, Tomcat, Undertow, etc.) as well as HTTP listener(s).

In a Spring Boot application, embedded containers are set up using an EmbeddedServletContainerFactory during bootstrapping. For Tomcat, a TomcatEmbeddedServletContainerFactory is initialized, and likewise for the others. These containers set up HTTP connectors and configure them for:

Port

URI Encoding

SSL settings (optional)

Compression (optional)

Protocol handler, etc.

HOW TO DISABLE SSL AND TLS VERSIONS BELOW 1.2?

In Spring Boot versions before 1.4.x

For Spring Boot applications with versions < 1.4.x, there is no support for disabling protocols through configuration. The application YAML configuration has a few properties to enable SSL, but it does not provide a mechanism to set the SSL enabled-protocols.

Thus, the changes have to be made programmatically.

But how?

Do I need to initialize the Tomcat factory and connector and stitch everything together?

Luckily, no. Spring Boot allows you to customize the existing container and, further, its connectors.

Does that mean I just need to create a customizer and somehow attach it to the existing initialized container?

Yes, that’s right.

Add the code below and your problem is solved. What we are doing is injecting, during the service bootstrapping process, a customizer that restricts the connector's enabled SSL protocols to TLSv1.2.
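Below is a minimal sketch, assuming Spring Boot 1.3.x with embedded Tomcat 8.0 and Java 8 (class and bean names are illustrative):

```java
import org.apache.coyote.http11.AbstractHttp11JsseProtocol;
import org.springframework.boot.context.embedded.EmbeddedServletContainerCustomizer;
import org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainerFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class Tlsv12OnlyConfig {

    // Customize the already-initialized container instead of building a
    // factory and connector from scratch.
    @Bean
    public EmbeddedServletContainerCustomizer tlsCustomizer() {
        return container -> {
            if (container instanceof TomcatEmbeddedServletContainerFactory) {
                TomcatEmbeddedServletContainerFactory tomcat =
                        (TomcatEmbeddedServletContainerFactory) container;
                tomcat.addConnectorCustomizers(connector -> {
                    Object handler = connector.getProtocolHandler();
                    if (handler instanceof AbstractHttp11JsseProtocol) {
                        // Only TLS 1.2 remains enabled; SSLv3, TLS 1.0 and TLS 1.1 are off.
                        ((AbstractHttp11JsseProtocol<?>) handler)
                                .setSslEnabledProtocols("TLSv1.2");
                    }
                });
            }
        };
    }
}
```

With this bean in place, handshakes from SSLv3, TLS 1.0, or TLS 1.1 clients are rejected, while TLS 1.2 clients connect as before.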

Amazon Web Services (AWS) provides many managed cloud services.
In this post, I want to share my learnings and experiences from working with one of them: LAMBDA.

I'll begin by explaining our use case a bit and then move on to implementing and deploying a Lambda.

USE CASE

I was working on designing and implementing a requirement to ticket air bookings. Without ticketing, a user cannot board a flight and thus fly.

MORE ABOUT TICKETING PROCESS

Ticketing is an orchestration of a series of steps; some require business-logic evaluation, and some require interacting with different 3rd-party services multiple times over the network.

This process can be seen as event-driven and can be done asynchronously, with retry and scheduling capabilities, involving interaction with 3rd-party services over the network.

It has to be completed within the time constraints set by airlines\GDSes; otherwise the user cannot fly.

After gathering requirements, it looked like a use case for building a bot, a Ticketing Bot more specifically, with the “Executor-Scheduler-Supervisor-Agent“ pattern fitting very well technically.

WHAT IS “EXECUTOR-SCHEDULER-SUPERVISOR-AGENT“?

It's a pattern wherein roles and responsibilities are clearly separated out across different actors\components.
Executor, Supervisor, and Agent represent different blocks, and each is responsible for performing a clearly defined task.

The Executor is responsible for executing the orchestration, and likewise for the others. You may choose to use persistent workflow frameworks or queues for orchestration execution.

WHERE DOES LAMBDA FIT IN OUR CASE?

The ticketing process has to be completed for multiple bookings; after all, multiple users are making bookings on our site.

This demands multiple executors running in parallel, each executing an orchestration independently with no interference.

Obviously, you will want each executor to pick a different booking for ticketing.
For this, you will have synchronization and other checks in place so that once a booking is owned by an executor, it does not get executed by another executor.

Let's say we have a strategy that once a booking is picked by an executor, the executor updates a workItem with its ownership and a timestamp, and changes its status to In_Progress to reflect that the ticketing process has kicked in.

Now think of scenarios wherein

an executor (server) performing a ticketing process crashes in the middle of the process,

or, you want to deploy incremental changes, which may involve halting\interrupting the currently executing ticketing processes.

The deployment scenario can be dealt with by publishing events to reach a consistent state and stop further processing.

But what about the crash scenario? In that case, ticketing processes will appear to be running with In_Progress status while that's not actually the case.

How will you ensure that those Processes get completed later?

We will surely want to complete the ticketing process at any cost.

What if we had something that could detect such stuck bookings and reprocess them from the last checkpoint?

Let's just focus on the Supervisor.

What is the role of “Supervisor”?

The Supervisor is the component responsible for detecting such stuck bookings and queuing them for further re-processing. Note that it does not start executing those processes itself; instead, it just re-queues them so that an executor can pick them up again.

In our case, the Supervisor has to connect to queues\data stores hosted in a VPC.

OK. What are the other expectations from this Supervisor?

It has to be available. You would not want your Supervisor to be down for a long time.

A single Supervisor can fulfill the need; there is no need to run multiple Supervisors at a time.

The Supervisor runs periodically.

The Supervisor runs in the background.

The Supervisor has no state attached to it.

All of the above expectations made LAMBDA a good fit in our case.

Enough of the story 🙂 Before you start cursing me, let’s start building a Lambda.

LAMBDA

Lambda is a function that can be executed in the AWS cloud environment based on certain trigger policies. A trigger can be a scheduled timed event, an S3 event, or the like.
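For reference, a bare-bones Java handler looks something like the sketch below (package and class names are illustrative); AWS invokes handleRequest on every trigger:

```java
package com.example.ticketing;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

public class SupervisorHandler implements RequestHandler<Object, String> {

    @Override
    public String handleRequest(Object event, Context context) {
        context.getLogger().log("Supervisor run started");
        // Detect stuck bookings and re-queue them here.
        return "OK";
    }
}
```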

I have another problem: I have different environments set up, and in each environment I have different settings; say, the MongoDB cluster is different.
I want to package resource files in the jar and load them per environment, rather than configuring each setting as an environment variable.

How can I initialize based on an environment?

Once again, AWS comes to the rescue. It provides the ability to specify environment variables during configuration, and these get passed to the Lambda function as environment variables on each execution.
What if we set an environment variable and, based on its value, load the resource file, just as Spring loads configuration based on the active profile?
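Here is a hedged sketch of that idea; the ENVIRONMENT variable name and the config file naming scheme are assumptions:

```java
import java.io.InputStream;
import java.util.Properties;

public final class EnvConfigLoader {

    // Loads e.g. config-test.properties or config-prod.properties from the jar,
    // selected by the ENVIRONMENT variable configured on the Lambda function.
    public static Properties load() throws Exception {
        String env = System.getenv().getOrDefault("ENVIRONMENT", "test");
        Properties props = new Properties();
        try (InputStream in = EnvConfigLoader.class
                .getResourceAsStream("/config-" + env + ".properties")) {
            if (in == null) {
                throw new IllegalStateException("No config for environment: " + env);
            }
            props.load(in);
        }
        return props;
    }
}
```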

DEPLOYING LAMBDA

Using the AWS CLI (Command Line Interface) to upload the jar and other required/optional configuration (a sample invocation follows this list)

Through the AWS console, where you can provide the different configurations
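For illustration, a CLI-based deployment might look like the following; the function name, role ARN, paths, and IDs are placeholders, so consult the AWS CLI reference for the full option set:

```bash
aws lambda create-function \
  --function-name ticketing-supervisor \
  --runtime java8 \
  --handler com.example.ticketing.SupervisorHandler \
  --zip-file fileb://build/supervisor.jar \
  --role arn:aws:iam::123456789012:role/lambda-exec-role \
  --environment "Variables={ENVIRONMENT=test}" \
  --vpc-config "SubnetIds=subnet-aaaa,SecurityGroupIds=sg-bbbb"
```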

HOW CAN I\WE ACCOMPLISH THIS?

We use different environments, like a test environment, stress, etc., before releasing to PROD, and in each environment we want different settings. How can we pass different settings, like we can activate different profiles in Spring? [ANSWER]: AWS lets you configure environment variables that are passed to a Lambda on execution. While configuring the Lambda function, define the environment variables it needs, and then branch on those values in code.

Our Lambda needs to connect to components\services deployed in our VPC, but on execution the Lambda function is not able to connect to them. [ANSWER]: AWS enforces security here. To allow connections, configure the Lambda with the proper subnet IDs of your VPC and the required permissions.

Our Lambda is not event-driven; it's based on files written to S3. How can we pass event data to the Lambda? [ANSWER]: This blog focused on a Lambda with no event data; however, AWS supports different event sources. Refer to the AWS documentation. To pass event data to the Lambda function, the handler can accept more parameters. A parameter can even be of a custom type, and AWS takes care of serialization and deserialization.

THINGS TO KEEP IN MIND

AWS puts restrictions on executing a Lambda, be it the size of the jar or constraints on resources like CPU and memory. Always check the current restrictions on the AWS site before considering Lambda.

Make sure that you understand the billing. Lambda is billed based on resource usage and total execution time.

We often come across requirements that are suited to integrating messaging frameworks into software systems.
There are many messaging frameworks available in the market: some are open source, some are paid/licensed, some provide great support, and some have good community support.

In order to make an apt choice, we look around and explore different messaging frameworks against our requirements.

This post compares a few popular messaging frameworks and aims to equip you with enough information to choose the best framework for your requirements.

COMPARISON GRID

| Criteria | RabbitMQ | Apache Kafka | AWS SQS |
| --- | --- | --- | --- |
| HA | ☑ Requires some extra work and may require 3rd-party plugins like Shovel and Federation | ☑ Out of the box (OOB) | ☑ OOB |
| Scalable | ☑ | ☑ | ☑ |
| Guaranteed Delivery | ☑ Supports consumer acknowledgments | ☑ Supports consumer acknowledgments | ☑ Supports consumer acknowledgments |
| Durable | ☑ Through disk nodes and queues, with extra configuration | ☑ OOB | ☑ Message retention of up to 14 days max, the default being 4 days |
| Exactly-Once Delivery | ☑ Annotates a message as redelivered when it was delivered earlier but the consumer ack failed; requires idempotent consumer behavior | ☑ Dependent on consumer behavior: the consumer is responsible for tracking and storing offsets (messages read so far). Kafka now supports storing offsets within Kafka itself, OOB through high-level consumers; still requires idempotent consumer behavior | ☑ No limits, but Standard queues allow 120,000 in-flight messages and FIFO queues 20,000 (messages are in-flight after they have been received from the queue by a consuming component but have not yet been deleted) |
| Message Content Limits | ☑ No limits | ☑ No limits | A message can include only XML, JSON, and unformatted text; allowed Unicode characters are #x9, #xA, #xD, #x20 to #xD7FF, #xE000 to #xFFFD, and #x10000 to #x10FFFF. Any characters not in this list are rejected |

Using Hibernate and struggling to query a DateTime column in an RDBMS (like MS-SQL) in a specific timezone?
No matter what timezone your DateTime object has, do you observe that, when the Hibernate query is issued,
the time in the JVM's default timezone is always passed, and thus you don't get the desired results?

If that's the case, this article describes a process to query a DateTime column in a specific timezone.

WHY DOES THIS HAPPEN?

It happens because your application server and database server are running in different timezones.

In that case, we need to ensure that DateTime query parameter values are sent in the DB's timezone to get the desired results.

Let's understand how Hibernate and the DB driver form a SQL query in the next section.

HOW DOES HIBERNATE CREATE A QUERY?

On the application server, the DB driver forms a command before sending it to the RDBMS. The database system then executes the query (compiling it if needed) and returns the results accordingly.

The DB driver instantiates the command in the form of a PreparedStatement object. A DB connection is then attached to this command object, on which the command will be executed. Since we want to query by certain parameters, DateTime in our case, the DB driver sets the query parameters on the command.

PreparedStatement exposes a few APIs to set parameters, depending on the parameter's type.
To pass DateTime information, the various APIs exposed are:

setDate

setTime

setTimestamp

All these methods allow a Calendar object to be passed. Using this Calendar object, the driver constructs the SQL DateTime value.

If this Calendar object is not passed, the driver uses the DEFAULT TIMEZONE of the JVM running the application. This is where things go wrong and the desired results are not obtained.
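To see the difference, here is a minimal JDBC sketch that passes an explicit Calendar; the connection URL and the assumed DB timezone are illustrative:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Timestamp;
import java.util.Calendar;
import java.util.TimeZone;

public class DbZoneQuery {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                     "jdbc:sqlserver://dbhost;databaseName=app"); // assumed URL
             PreparedStatement ps = con.prepareStatement(
                     "SELECT id FROM booking WHERE created_at >= ?")) {
            // A Calendar in the DB server's timezone; without it the driver
            // falls back to the JVM's default timezone.
            Calendar dbZone = Calendar.getInstance(TimeZone.getTimeZone("UTC"));
            ps.setTimestamp(1, Timestamp.valueOf("2018-01-01 00:00:00"), dbZone);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getLong("id"));
                }
            }
        }
    }
}
```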

How can we solve it then?

DIFFERENT APPROACHES

Setting the same timezone on the application server and the DB server

Setting the JVM's timezone to that of the DB server

Extending the TimestampTypeDescriptor and AbstractSingleColumnStandardBasicType classes and attaching them to the driver

The 1st and 2nd approaches are fine; however, they can have side effects.

The 1st can impact other applications running on the same system. Usually one application runs on a single server in a production or LIVE environment; even so, this choice limits the deployment of other applications there.

The 2nd approach is better than the 1st, since it will not impact other applications. The caveat here is: what if your application talks to different DB systems sitting in different timezones? Or what if you want to set the timezone on only a few selected time fields?

The 3rd approach is flexible. It allows you to represent different time fields in different timezones, as sketched below.
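A hedged sketch of the 3rd approach against Hibernate 5.x follows; the exact set of abstract BasicBinder methods varies by minor version, and the DB timezone is assumed to be UTC here:

```java
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Timestamp;
import java.util.Calendar;
import java.util.TimeZone;

import org.hibernate.type.descriptor.ValueBinder;
import org.hibernate.type.descriptor.WrapperOptions;
import org.hibernate.type.descriptor.java.JavaTypeDescriptor;
import org.hibernate.type.descriptor.sql.BasicBinder;
import org.hibernate.type.descriptor.sql.TimestampTypeDescriptor;

public class DbZoneTimestampTypeDescriptor extends TimestampTypeDescriptor {

    public static final DbZoneTimestampTypeDescriptor INSTANCE =
            new DbZoneTimestampTypeDescriptor();

    @Override
    public <X> ValueBinder<X> getBinder(final JavaTypeDescriptor<X> javaTypeDescriptor) {
        return new BasicBinder<X>(javaTypeDescriptor, this) {
            @Override
            protected void doBind(PreparedStatement st, X value, int index,
                                  WrapperOptions options) throws SQLException {
                // Bind with an explicit Calendar so the JVM default zone is ignored.
                st.setTimestamp(index,
                        javaTypeDescriptor.unwrap(value, Timestamp.class, options),
                        Calendar.getInstance(TimeZone.getTimeZone("UTC")));
            }
        };
    }
}
```

The descriptor is then referenced from a subclass of AbstractSingleColumnStandardBasicType registered with Hibernate, so only the entity fields mapped with that type are affected.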

INTRODUCTION

Often a need arises to migrate data from one system to another. These persistent data systems, source and destination, could be entirely different and from different vendors.
The migration could be due to a change in requirements or to technology advancements.

Add to that the changes in the tier above, which makes use of the persistent system.
To make sure that everything works fine on the new system, you may plan to route a small percentage of traffic to the new system and calibrate\compare the results with the old stack's results.

For proper calibration, and to find the differences between the result sets of the old and new systems, the task at hand is to synchronize the data across the two systems, stored differently in each.

If that’s the case, this article can help you achieve Data Synchronization across Heterogeneous Systems on an ongoing basis.

This article presents a concept for seamlessly and incrementally moving data from your current data storage system to a different one, be it on-premise or in the cloud.

TERMS USED

Batch: A collection of data records to be moved across

BatchState: Represents the status of a batch transfer: IN_PROGRESS, FAILED, or COMPLETED

Metadata: Represents the batch details which help in detecting the next batch of data to be synchronized

WHICH COMPONENTS ARE INVOLVED?

Data Source: The actual source containing the original data to be synchronized

Data Destination: The persistent system you want your data moved to

Syncer Component: Responsible for detecting the incremental changes and synchronizing them

Transformer Component: Responsible for transforming the source data structure into the destination's. This is required if you restructure the data.

Tracker System: Responsible for storing the status and details of the last batch of data synced

The diagram below depicts the problem statement: syncing on-premise RDBMS data to a NoSQL (MongoDB) storage system in the AWS cloud.

WHY INCREMENTALLY?

You may have a huge amount of data in your storage system that you cannot move in a single operation. This could be due to resource constraints (memory, network, etc.) that may hinder the data synchronization.

And what if this data is changed frequently by business users? Doing a full synchronization each time can prove costly.

How can we reduce this cost? How do we increase the chances of successful data synchronization?

How can we make this process resilient and resume from the point where it stopped or failed the last time?

How about splitting up the Data to be synchronized?

How about defining a batch of data, pulling up the data of this batch only, and then transferring this data batch?

In order to accomplish this, we need to store details from which we can determine how much data we have already synced and what the next batch of data to sync is.

HOW DOES IT WORK?

Before we go further into the steps involved, let's understand the batch metadata.

batchOffset is the marker. Based on the status, you can compute where to begin or resume the process: if the last batch synced successfully, the next batch starts at batchOffset + batchSize; otherwise it starts at batchOffset, because the last batch failed.

batchSize denotes the number of records you want to sync in a single operation, and thus the amount of data.
It should be neither too small (resulting in more roundtrips and more processing time) nor too big (requiring more resources: memory, network bandwidth, etc.).

status denotes the sync operation Status of the batch

isIncrementalModeOn denotes whether the sync process is only pulling incremental updates (including additions). When it is on, the source data has already been completely synchronized at least once.

rulesUpdateDateTimeBeginPickedUpForMigration and rulesUpdateDateTimeEndPickedUpForMigration denote the time boundaries for incremental updates. These are useful in pulling up the incremental changes during this time period.

migrationStartDateTime and migrationEndDateTime are useful for tracking, to determine how long this batch sync took.
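Taken together, the metadata can be sketched as a small class; the field names mirror the description above:

```java
import java.util.Date;

public class BatchMetadata {
    long batchOffset;            // marker: where the next batch starts or resumes
    int batchSize;               // number of records per sync operation
    String status;               // IN_PROGRESS, FAILED or COMPLETED
    boolean isIncrementalModeOn; // true once the full source has been synced
    Date rulesUpdateDateTimeBeginPickedUpForMigration; // incremental window start
    Date rulesUpdateDateTimeEndPickedUpForMigration;   // incremental window end
    Date migrationStartDateTime; // tracking: when this batch sync started
    Date migrationEndDateTime;   // tracking: when this batch sync finished

    // Resume point: advance past a COMPLETED batch, retry a FAILED one.
    long nextOffset() {
        return "COMPLETED".equals(status) ? batchOffset + batchSize : batchOffset;
    }
}
```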

With this information, let’s see the sequence of events which happen to sync the batch of data.

The process is initiated or resumed by the Syncer component.

The Syncer pulls the last migrated batch details from the Tracker system.

Using the batch metadata, it identifies the next batch of data to be synchronized.
It makes an entry into the Tracker system to store the next batch metadata with IN_PROGRESS status.

It then builds the query and pulls the records for the next batch from the source system. You can use any ORM, Hibernate or JPA, to get the data.

It then delegates to Transformer to transform the source data structure to destination data structure.

With transformed data, it identifies the data to be created and data to be updated and accordingly splits the data.

It then sends data to Destination System.

Depending upon the operation status, it marks the Batch either as COMPLETED or FAILED status.

And this sequence of steps goes on until there isn't any more data to sync.

At this point, isIncrementalModeOn is saved as TRUE in the Tracker system, and from then on the Syncer can tweak the query to pull the data records for a time window.
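Putting the steps together, a minimal sketch of the Syncer loop could look like this; Tracker, Source, Transformer, and Destination are illustrative interfaces, not a real API:

```java
import java.util.List;

public class Syncer {

    interface Tracker { BatchMetadata lastBatch(); void save(BatchMetadata b); }
    interface Source { List<Object> fetch(long offset, int size); }
    interface Transformer { List<Object> transform(List<Object> records); }
    interface Destination { void upsert(List<Object> records); } // splits create vs update

    private final Tracker tracker;
    private final Source source;
    private final Transformer transformer;
    private final Destination destination;

    Syncer(Tracker t, Source s, Transformer tr, Destination d) {
        this.tracker = t; this.source = s; this.transformer = tr; this.destination = d;
    }

    public void runSync() {
        BatchMetadata last = tracker.lastBatch();
        while (true) {
            BatchMetadata next = new BatchMetadata();
            next.batchOffset = last.nextOffset(); // resume point from metadata
            next.batchSize = last.batchSize;
            next.status = "IN_PROGRESS";
            tracker.save(next);                   // record the attempt first
            List<Object> records = source.fetch(next.batchOffset, next.batchSize);
            if (records.isEmpty()) {
                break;                            // nothing left to sync
            }
            try {
                destination.upsert(transformer.transform(records));
                next.status = "COMPLETED";
            } catch (RuntimeException e) {
                next.status = "FAILED";           // retried on the next run
            }
            tracker.save(next);
            last = next;
        }
    }
}
```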


BATCH PROCESS STATE

In case you want primary and secondary sync processes, so as to guarantee high availability of the sync process, we need to maintain and detect the various states of a sync process. With this data, we can ensure that no two sync processes run at the same time.

BATCH STATES aka STATUS

Every individual batch of data goes through a few states. The diagram below represents the various states a batch goes through in the syncing process.

THINGS TO KEEP IN MIND

Idempotency and Duplicate Prevention:

We are transferring a batch of records, so it may happen that a batch partially succeeds, meaning a few records got synced and the rest failed for some reason. In such cases, if you retry posting the data, the same data may get saved twice or more. To prevent this, query which data has to be inserted and which has to be updated; you can make use of indexes or a similar concept.

Timezone Differences:

The Syncer system and the data source system can be in different timezones, or the source data may be stored in a specific timezone. So, if you are pulling records based on a time window, make sure the timezone information is converted to the source system's timezone before querying.

Security:

For sensitive data, you can enable SSL/TLS at the transport layer. You may also want authentication and authorization enabled on both ends: the source and destination storage systems.

Hard Deletes:

Soft deletes, like marking a business rule inactive, are taken care of by the Syncer process. But what if a tuple is hard-deleted from the source storage? For hard deletes, you may have to use triggers to capture the deleted tuples.

Alert Mechanism to detect Stopped Sync Process:

The sync process can also fail for any reason. Without an alerting mechanism, this may go unnoticed, and the heterogeneous systems can drift out of sync. To prevent such circumstances, log start and stop events into a sink like Splunk and set up alerts on them.

WHAT QoS PARAMETERS ARE IMPLEMENTED?

Eventual Consistency

Guaranteed Sync

Fault Tolerance

Idempotency

Also, updates made while a sync is running are not missed

HOSTING MECHANISM

There can be multiple ways to host a Syncer process. Depending on the traffic your consuming application takes, you can

either host the Syncer process inside the same application that relies on this data,

or host it as a separate process and schedule it using AWS Lambda or AWS Batch.

ALTERNATIVES

Amazon DMS also offers ongoing data migration; however, it supports only selected storage systems. At the time of implementing this, the Amazon DMS offering did not support MSSQL –> MongoDB.

If you want to sync data to AWS RDS, Amazon DMS can be used.

Also, if you have huge data, ranging in the hundreds of TBs, and limited network bandwidth, and you want to get this done quickly and only once, AWS Snowball is another offering you can use.

In this article, I will talk about running-instance health: what can represent health, how we can detect it, and how we can use this health information to make the system resilient.

Health, basically, defines how well an instance is responding. Health can be:

UP

DOWN

REAL LIFE PROBLEM
Imagine you reach a bank and find it closed. Or imagine you are standing in a bank counter queue, waiting to be served. By the time your turn arrives, the person sitting at the counter walks away; maybe that person is not feeling well.

How would you feel in such a situation? Irritated? Frustrated?
What if you had been told upfront about this situation? Your time would not have been wasted. You would not have felt bad.

But what if someone else takes over that counter and starts serving you?

Now imagine a pool of servers hosting a site that allows you to upload a video, say http://www.Youtube.com. You are trying to upload a small video of yours, and every time you try, you get some error after a while and the video cannot be uploaded.

Basically, software applications like http://www.youtube.com run on machines, be they physical or virtual, to get the desired results. Executing these applications requires the machine's local resources, like memory, CPU, network, and disk, or other external dependencies to get things done.
These resources are limited, and executing multiple tasks concurrently poses a risk of contention and exhaustion.
It may happen that enough resources are not available for execution, and thus a task's execution will eventually fail.

In order to make the system resilient, one of the things that can be done is to proactively determine the health status and report it, to a load balancer or to service discoverers etc., whenever asked, in order to prevent or deal with failures.

Reporting a health status with proper HTTP status codes, like 200 for UP and 500 for DOWN, can be quite useful.

WHAT CAN DEFINE INSTANCE\PROCESS HEALTH?
Below is a list of some common metrics that can be useful in detecting the health of an instance:

Pending Requests

Container Level

Message Level

Latency Overhead – Defined as the TP99 latency added by this application/layer

TP99 or TP95 or TP75 as per your Service SLAs

Resources

% Memory Utilization – Leading towards OOM

% CPU Utilization

Host Level

Process Level

Number of Threads

Any Business KPI

External dependency failures (optionally)

Identifying the above criteria is important, and so is choosing the correct threshold or saturation values.
Values that are too low or too high can make the system unreliable.

WHY IS IT IMPORTANT?

A system is usually expected to be highly available and reliable. High availability can be achieved through redundancy, wherein multiple server instances run in parallel, processing requests and thus meeting the demand.

What if one or more instances are running out of resources and thus not able to meet the demand?

Detecting such a state at an appropriate time and taking an action can help in achieving High Availability and Reliability of the System.

It helps in making the system resilient against failures.

ACTIONS ON DETECTING UNHEALTHY

REPLENISH through REBOOT: If you have a limited server pool capacity and cannot increase it, the unhealthy machine has to be restarted\rebooted to bring it back to a healthy state.

REPLACE: If you have unlimited server capacity or are using a cloud computing platform (AWS, Azure, Google Cloud, etc.), then rather than rebooting the machine, you have the option of starting a new machine and killing and removing the old, unhealthy machine from processing requests.

In short, once an instance is detected as unhealthy, it shall be replenished or replaced:
either rebooted to bring it back to a healthy state, or replaced with a new server that is put behind the load balancer while the old one is removed.

Spring Boot includes a number of built-in endpoints.
One of them is the health endpoint, which provides basic application health information.
By default, the health endpoint is mapped to /health.

On invoking this endpoint, health information is collected from all HealthIndicator beans defined in your
ApplicationContext, and based on the health statuses returned by these HealthIndicators, an aggregated health status is returned.
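For example, on a healthy instance the endpoint returns a response along these lines; the exact indicators and details vary with your configuration:

```json
{
  "status": "UP",
  "diskSpace": { "status": "UP", "free": 109188157440, "threshold": 10485760 },
  "db": { "status": "UP", "database": "Microsoft SQL Server" }
}
```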

Spring Boot includes a number of auto-configured HealthIndicators and also allows us to write our own.

Since we keep track of certain metrics in our applications, we wanted the ability to evaluate health based on certain
metrics' values. For example, if the number of threads exceeds 'n', health shall be reported as DOWN.
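Before getting to that component, here is a minimal sketch of a plain custom HealthIndicator implementing exactly this rule; the threshold of 100 is an assumed saturation level:

```java
import java.lang.management.ManagementFactory;

import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.stereotype.Component;

@Component
public class ThreadCountHealthIndicator implements HealthIndicator {

    private static final int SATURATION_LEVEL = 100; // assumed threshold 'n'

    @Override
    public Health health() {
        int threads = ManagementFactory.getThreadMXBean().getThreadCount();
        Health.Builder builder = threads < SATURATION_LEVEL ? Health.up() : Health.down();
        return builder.withDetail("threads", threads).build();
    }
}
```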

For this purpose, a CompositeMetricBasedHealthEvaluator is implemented.
It relies on either MetricReaders or PublicMetrics to get the metrics' current values and evaluates
health accordingly.

It reports the individual health of every configured health criterion and reports the overall health as DOWN if any of
them is down.

For an unavailable metric, health cannot be determined and is thus reported as UNKNOWN for that specific metric.
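The original embedded snippet is unavailable, so the application.yml below is a hypothetical sketch of what two criteria could look like; the property names follow the fields described below:

```yaml
health:
  criteria:
    - metricName: threads
      thresholdOrSaturationLevel: 100
      operator: LESS_THAN
    - metricName: mem.free
      thresholdOrSaturationLevel: 50000
      operator: GREATER_THAN
```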

With the above configuration, two criteria are defined and the **HealthCriteriaList** object gets instantiated via the
Configuration annotation.

Here, the thread criterion specifies that for health to be **UP**, the number of threads must be < 100.
If NumberOfThreads >= 100, health will be reported as **DOWN**.

Likewise, more criteria can be defined.

Note that
* **metricName** can contain ‘.’ character as well.
* **thresholdOrSaturationLevel** can have any Valid Number, be it Integer or Decimal Number
* **operator** can be any valid value from ComparisonOperator enum.

The configuration below instantiates a MetricBasedSpringBootAdapter with MetricReaders only.
Both parameters, healthCriteriaList and metricReaderList, are injected automatically through the Spring application
context. This happens due to auto-configuration.
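The original embedded snippet is also unavailable here; the sketch below is hypothetical, with the constructor signatures assumed, and belongs inside a @Configuration class:

```java
@Bean
public MetricBasedSpringBootAdapter metricBasedHealthIndicator(
        HealthCriteriaList healthCriteriaList, List<MetricReader> metricReaderList) {
    // Bean name "metricBasedHealthIndicator" surfaces as indicator name "metricBased".
    return new MetricBasedSpringBootAdapter(healthCriteriaList, metricReaderList);
}

// Alternative wiring when only PublicMetrics are available:
// @Bean
// public MetricBasedSpringBootAdapter metricBasedHealthIndicator(
//         HealthCriteriaList healthCriteriaList, Collection<PublicMetrics> publicMetrics) {
//     return new MetricBasedSpringBootAdapter(healthCriteriaList, publicMetrics);
// }
```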

The alternative wiring can be useful where a MetricReader is not available to read the metric, but the metric is
available publicly through the PublicMetrics interface.
With that configuration too, all parameters are injected automatically by Spring.

Things to Note
* The name of the bean minus the suffix HealthIndicator (metricBased) is what is reported as the HealthIndicator name.
* Auto-configuration of MetricReaders, PublicMetrics, or the configuration itself could be disabled. If that is the case, either
enable auto-configuration or manually instantiate the MetricReaders, PublicMetrics, etc.
* The PublicMetrics interface can be expensive, depending on the number of metrics being maintained. Use it only if a
custom MetricReader cannot be written or the metrics are small in number.


Many of us are involved in writing scripts, be it for development, testing, or deployment.
We make use of different scripting languages; one of them is PowerShell.
As the name suggests, it's really powerful.

You can accomplish so many things in PowerShell. But what if you already have something developed in .NET, with an assembly (remember *.dll files) available to you?

Would you like to mimic everything in PowerShell? Or would you wish the same .NET assembly could be reused?

Why am I writing this?
I was working on automating, or writing a workflow for, deploying virtual machines (aka Persistent VM Roles) on the Microsoft Azure cloud.
I did it using PowerShell scripts (you can find a lot of support and sample PowerShell scripts on the MS community sites).

That part became simple. However, that's not all for me.

I am hungry :), hungry to understand things and get to the roots.

I wanted to understand the code working behind the scenes.

Read this post further…

What are PowerShell cmdlets?
PowerShell cmdlets are actually exposed through .NET assemblies: a bunch of assemblies targeting the .NET Framework execute to produce the results we want.

If you have worked in .NET, you will have come across attributes. Yes, that is how PowerShell cmdlets are exposed.

Classes and fields/parameters are attributed with Cmdlet and Parameter.
That's it. The PowerShell execution engine can then load these types and execute them.

The bottom line is: cmdlets are classes annotated with the Cmdlet attribute.

How to decompile?
Now we know that it's actually a .NET type in a .NET assembly that gets things done, and we all know how to decompile a .NET assembly.
We may use 3rd-party tools, some free and some not.
This is not a big deal.

However, how do you identify and locate the assembly containing a specific cmdlet?

You may say that you are not the CLR, which is responsible for locating, loading, and executing types, among other things.

Then HOW, you may ask.

For this, we'll again make use of the PowerShell command prompt.

Open up the PowerShell prompt and execute the following command:

$commandDetailsObj = Get-Command nameOfCommand
<# where:
$commandDetailsObj is how you declare a variable in PowerShell,
Get-Command is another PowerShell cmdlet (gcm is an alias of this cmdlet),
and
nameOfCommand is the name of the cmdlet you want to decompile, say, Add-AzureAccount
#>

The above command gets the details of the cmdlet and stores them in the $commandDetailsObj variable.
Since a cmdlet name can actually be an alias of the real cmdlet, we keep doing the below until we reach the actual command.

Like this:
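(The original screenshot is unavailable; this is a sketch of that resolution loop. The DLL and ImplementingType properties apply once the resolved command is a cmdlet.)

```powershell
while ($commandDetailsObj.CommandType -eq "Alias") {
    $commandDetailsObj = Get-Command $commandDetailsObj.Definition
}
# These point you at the implementing assembly and type to decompile:
$commandDetailsObj.DLL
$commandDetailsObj.ImplementingType
```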

The title of this post may sound a bit strange to those who have not faced this problem, but it may sound like sweet music 🙂 to those who want to resolve this nasty error in their application.

If you fall into the latter category, you can jump directly to the Resolution section, though everybody is definitely welcome to read the entire post.

What is this about?
An error which occurs when using the Enterprise Library Data Access Block to instantiate a Database via the factory approach.
You may have followed the MSDN article to set up the Data Access Block, with the correct code and configuration in your application, yet you always hit the error when you try to instantiate a Database object.

Context
Typically, software solutions are multi-layered. One layer is the Data Access Layer, aka DAL, which interacts with the data store(s) and performs the CRUD operations on the data. In this layer, you can opt for ADO.NET or the Enterprise Library Data Access Block, among other options, to connect to the data store (database).

Since this post is about a specific error in EntLib, let's assume we chose to implement the DAL using the EntLib Data Access Block.

Problem / Error
Activation error occured while trying to get instance of type Database, key “”

This error occurs on the code statement below, the very first statement executed to perform a CRUD operation against the data store:

Database dataStore = DatabaseFactory.CreateDatabase();

or,

Database dataStore = DatabaseFactory.CreateDatabase("someKey");

Cause
The Enterprise Library consists of a number of classes in different namespaces and assemblies.
Two of them are:

Microsoft.Practices.EnterpriseLibrary.Data

Microsoft.Practices.EnterpriseLibrary.Common

The above code statement lives in the former assembly. After a series of function calls through both assemblies, a function in the latter assembly tries to load the former assembly using its partial name.

Note: loading an assembly by partial name is what leads to the error when the Enterprise Library assemblies are GACed and not copied locally into the application directory.
An assembly requested by partial name won't be found in the GAC, so probing continues in the local application directory and its sub-directories (or as per configuration).
Since the assembly is not present anywhere except the GAC, the assembly load fails, leading to this error.

You can see this in action by launching the Fusion Log Viewer utility, which comes by default; the command is “fuslogvw” in case you cannot locate the utility. Type the command in the Visual Studio command prompt.
You may need to customize the Log Viewer to log all bindings to disk in order to view every entry.

[You can open this assembly in Reflector or ILSpy and step through each code statement and function call after the statement above to understand more.]

So, is there a solution or a workaround for the above problem?

Resolution
This problem is solvable. 🙂
It can be solved in several ways; choose whichever suits you best.

You can deploy the Enterprise Library assembly, “Microsoft.Practices.EnterpriseLibrary.Data”, locally to the application's bin directory. [This may lead to maintaining multiple copies of the same assembly.]

Another option is to add the configuration below to the application configuration file. This appears to be a cleaner approach, but again, the same change has to be made in every application that uses this library:

<runtime>
  <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
    <qualifyAssembly partialName="Microsoft.Practices.EnterpriseLibrary.Data"
                     fullName="Microsoft.Practices.EnterpriseLibrary.Data, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
  </assemblyBinding>
</runtime>

Do You want to advertise your Business here? Contact me @ dem.street@gmail.com or leave a message here. Note that this is an informative site with traffic coming from different websites. No Requests from Irrelevant sites, not suitable as per laws or obscenity etc will be entertained