As a beginner it was damn hard for me to be consistent or even do one perfect push-up. It started on day 1 at the gym, where I did around 12 push-ups in set 1, but then the numbers kept dropping, and so did the sets.

Below is a day-by-day account of the push-ups done during my initial days of working out; I wanted to experience and document what the journey to my first 20 push-ups per set would take. Check out my first 20 push-up video uploaded on YouTube.

Week1

Day 1 – An awesome 12 push-ups (but after the workout, while walking back home, my whole body was so tight I could hardly control it)

Day 2 – Couldn't do a single push-up as my arms were jammed; in fact I couldn't even raise my hands, and it was a struggle just to put on my upper clothes, especially T-shirts.

Day 5 – Trying hard every day for a single push-up; the moment I started and went down, I would feel so much pressure on my arms that I couldn't get back up.

Day 6 – Not even a single push-up yet, because I could still feel the pressure on my arms.

Day 7 – The trainer recommended I take a rest and skip the workout. I still practiced at home to see if I could manage one, but failed.

Week 2

Day 8 – Still trying hard, but I can't push back up once I go down.

Day 10 – Still practicing, but no luck.

Day 13 – Still not even a single perfect push-up.

Day 14 – Sunday – No hope, even after trying hard for these two weeks. I was feeling very ashamed and frustrated, thinking: what's happening, I can't even do one?

I'd seen people doing 20–25 push-ups per set on the floor, and I was struggling with one? I kept wondering what the people around me might be thinking as I failed.

Week 3

No going to the gym this week, as I had to travel out of town on official work, but at the same time I asked my trainer for guidance on what to do and got some tips.

The tip was to use leg support for the push-up, and I liked this idea.

Day 16 – Started with leg support and managed to do 10 half push-ups.

Day 18 – Practicing sincerely day by day; progressed to 10x 2 sets.

Day 20 – Practiced right through the weekend; still not a single perfect push-up, but at the same time I progressed to 15x 3 sets of half push-ups.

Day 21 – Sunday – travelling back home and couldn't practice, but I was waiting for Monday to get back to the gym.

Week 4

Day 22 – Did a warm-up and felt strong. Did one perfect, complete push-up – a smile on my face; finally I did it. You see, nothing is as impossible as one thinks.

Day 24 – 4 complete push-ups; I couldn't hold back my smile, as I had lost hope of ever doing more than one. I gladly kept telling my colleagues and trainer about my progress.

Day 26 – 7.5x set1 complete push-up & 4x set2

Day 27 – 8x set1 complete push-up & 3x set2/set3/set4

Week 5

Day 29 – Monday / 10x set1 complete push-up, 8x set 2 & 7x set 3

Day 30 – 11.5x set1, 8x set2 & 7x set3

Day 34 – 14x set1, 11x set2 & 10x set 3

Week 6 & It goes on…

Day 40 – 15x set1/2 & 14x set3

From then on it became consistent and I pretty much got addicted, doing it twice over – not only during workout hours but also after getting up early, before going to the office.

No excuses for skipping push-ups; rather, I did them even while travelling, wherever I got the chance – late evening in the waiting room of Howrah train station, or inside Bhubaneshwar airport just before the boarding pass was issued. I didn't care about the public audience, and to my surprise a few people stared at me, and some came up to me for tips on the benefits and how to do them.

Since then the number of push-ups has increased drastically, and I at least feel proud that I'm doing somewhat the equivalent of whoever is working out at the gym: I now do a total of around 90–100 push-ups per day.

It took me 6 to 7 weeks to reach 20x set 1 and a total of 90–100 push-ups per day, and I thought: why not make a video for remembrance, in the hope that it inspires and motivates others.

I cannot thank my trainer enough for the support and guidance I got, and I feel proud to mention their name below.

Let me start with the objective of Data Loss Prevention (DLP) in Microsoft Exchange Server 2013 (also called "The New Exchange" – a cloud-centric messaging version): to prevent accidental loss of data sent by email. It educates both end users (using Policy Tip alerts, so that confidential data isn't leaked by accident) and Exchange administrators (helping them understand what risk the organization is carrying and how to mitigate it).

I've scanned and read through TechNet, TechEd videos, and some experts' blogs. In my experience it's a wonderful feature that is gradually meeting the business need to protect against accidental leaks of data via email (alone).

Who should plan or think of implementing DLP? Ask yourselves, and let me help you too.

IMHO: organizations in the banking sector, financial institutions, and any other firms that are strict about implementing regulatory requirements and compliance with regard to email security.

Do you suspect there might be a leak in your email traffic involving financial data – SSNs, credit card details, IP addresses, or whatever permutation and combination comes to your mind that could appear as text in an email body?

There is also a chance of accidental loss of confidential data when a user sends an email to internal or external recipients they didn't intend to – LOL, whatever the user's justification is.

The whole functionality works with Outlook 2013 alone (Policy Tips in particular are not available in lower versions of Outlook).

The DLP rules will still work via Outlook 2007, but will lack the Policy Tips feature.

Any Advantages?

Of course: users will not be able to accidentally send out confidential data via email.

Exchange folks can now analyze and track the number of email transactions and build a report of the confidential email traffic, measured against the company's compliance policies. This in turn builds knowledge and makes them aware of how users are meeting the organization's compliance requirements.

Users are educated with the help of Policy Tips if they accidentally try to leak confidential data, and based on the rules you can allow the user to override or block the message completely.

You can double up on security by implementing AD RMS and integrating it with a DLP transport rule, as was done with simple transport rules in the legacy versions.

Even if Outlook 2013 is in cached mode or offline, the Policy Tips are still applied, as the templates are downloaded from the server to Outlook once a day (every 24 hours), whenever it is reachable. We can control from the server side whether or not to push the policy to clients; the once-a-day schedule is the default and is hard-coded (it can't be altered).

And Disadvantages if any?

Policy Tips only work with Outlook 2013 – not even OWA 2013 supports them.

The policy templates are limited per region/country, and you may need to customize one to suit your business needs.

You need to check with third-party vendors for policy templates to see whether any vendor meets your organization's business requirements – or you can make your own if you know how.

With DLP implemented on Exchange Online, the reports cannot be exported to CSV.

Can I compare DLP with other vendors in the market? Oh, please don't – IMO.

I initially tried checking other vendors, purely as research, since they have the same feature, so-called "Data Loss Protection", but you know what – they will win. They not only have the feature alone but a whole suite – endpoint / gateway / network / storage protection – which IMO sounds good but involves a great investment and adds complexity (meaning additional stuff to manage) to your environment.

There are vendors specializing in individual products, and by no means should you be surprised at being attracted to a feature like the content detection engine – one of the areas that impressed me. Microsoft has only just begun with DLP in Exchange 2013 and has a long way to go. Also, FYI, the other vendors have their drawbacks too when compared with Exchange DLP, which alone has direct integration of Outlook 2013 with Exchange 2013 and is managed under the hood through the common EAC console.

Why DLP and not a transport rule? Here it gets a bit more technical, which might be of interest to Exchange folks.

If I start writing, it won't end; TechNet is the right source for a more precise deep dive, along with consulting MCS or people who are Exchange experts. I will highlight some of the important points that are worth knowing at this moment.

Although DLP is built on transport rules (which are themselves very similar to Outlook rules), it is more intelligent: it not only detects keywords but also reads attachments that might contain confidential data. It starts with the transport rule and then applies its intelligence, detecting content and attachments to match the policy templates in use – built-in, custom, or imported from a third-party vendor – as per the business compliance requirements. It not only helps protect the data but also helps administrators understand the level of risk the organization is carrying.

By implementing DLP, an administrator can not only alert end users with Policy Tips in Outlook 2013 and prevent accidental data leaks (you can also configure override settings) but also capture the number of incidents, and track who sent the emails and how many times, based on the policy template settings. You then have auditing, which is simply sending an incident report to a configured user/group showing exactly what matched the policy templates: for example, detecting credit card numbers in the message body or attachments, the matched policy name, and the values it found, such as 5432 XXXX XXXX XXXX (where X equals some digit). Now here is the great part: what if the user entered 1111 XXXX XXXX XXXX? DLP is smart enough to know that credit card numbers never start with 1, so it will not prevent the user from sending the email. You can develop your own template to build this kind of search-and-detect intelligence.
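As a rough sketch of what this looks like in the Exchange Management Shell (the policy name, rule name, and report mailbox below are my own placeholders, not from any particular deployment; double-check the parameters on TechNet for your CU):

```powershell
# Create a DLP policy from a built-in template, in audit mode to start with
New-DlpPolicy -Name "Contoso PCI-DSS" `
    -Template "PCI Data Security Standard (PCI-DSS)" `
    -Mode Audit

# A transport rule that fires on credit card numbers in the body or attachments,
# shows a Policy Tip to the sender, and mails an incident report for auditing
New-TransportRule -Name "Notify on credit card numbers" `
    -MessageContainsDataClassifications @{Name="Credit Card Number"} `
    -NotifySender NotifyOnly `
    -GenerateIncidentReport "compliance@contoso.com" `
    -IncidentReportOriginalMail IncludeOriginalMail
```

The `-NotifySender` values range from NotifyOnly up to RejectUnlessExplicitOverride, which is where the override-or-block choice mentioned above is made.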

You can simply implement DLP rules for some users in test mode, in which users are unaware of the tracking and auditing done at the transport level, and later enforce the same rules. You can also export the statistics to CSV to create reports and dashboards.
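A minimal sketch of that test-then-enforce flow (the policy name is a placeholder):

```powershell
Set-DlpPolicy -Identity "Contoso PCI-DSS" -Mode Audit           # silent test: audit only
Set-DlpPolicy -Identity "Contoso PCI-DSS" -Mode AuditAndNotify  # test with Policy Tips shown
Set-DlpPolicy -Identity "Contoso PCI-DSS" -Mode Enforce         # enforce once you trust the matches
```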

So how does it actually work?

As mentioned above, it works on transport rules, with an additional detection mechanism based on the classified policy templates, the rules available, and how they are configured.

The templates are nothing but XML files, which can also be encrypted; you'll have some if you got them from a vendor, or you can make your own.
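Importing such a template XML might look like this (the file path is a placeholder):

```powershell
# Import a vendor-supplied or hand-built DLP policy template into the organization
Import-DlpPolicyTemplate -FileData ([System.IO.File]::ReadAllBytes("C:\Templates\ContosoCustom.xml"))

# Confirm it shows up alongside the built-in templates
Get-DlpPolicyTemplate | Format-Table Name
```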

I would recommend you all go through and read the links, as they contain valuable information on DLP with Microsoft Exchange 2013.

Hope it was informative.

Single Sign On using MS Directory Synchronization Tool – Enabling Password Sync
https://charlesgate86.wordpress.com/2014/02/02/single-sign-on-using-ms-directory-synchronization-tool-enabling-password-sync/
Sun, 02 Feb 2014 – charlesgate86

It all started with designing a Hybrid project for one of my clients, where I was supposed to plan for single sign-on, and the feature extended to the DirSync tool a long time back, called "Enable Password Synchronization", came to my mind.

I was planning to use ADFS for single sign-on, but soon realized I could use this DirSync feature instead and minimize the complexity and cost of implementing ADFS on-premises.

Below are some important points you need to look at and consider while designing for SSO, as they helped me focus.

Prerequisites:

Make sure you have at least an Office 365 Midsize Business subscription plan to integrate on-premises AD with Azure AD in the cloud.

The DirSync tool version must be at least 6382.0000 to sync passwords from on-premises to Azure AD in the cloud.

Network connectivity and credentials with the appropriate permissions are required to sync passwords using the DirSync tool from on-premises to Azure AD in the cloud.

Important points to note:

Additional security is applied to the hash value of the password before it leaves the premises and synchronizes to Azure AD in the cloud.

Password sync is one-way, from on-premises to Azure AD in the cloud; it cannot be reversed, except via the write-back attribute with the help of the two-way synchronization feature.

Password synchronization runs on a different frequency from the actual AD object replication (which can be scheduled) from on-premises to Azure AD in the cloud, and it overwrites the previously synced value.

All users' passwords are synchronized to Azure AD in the cloud by the DirSync tool; you cannot explicitly define which users' passwords to synchronize.

How it works:

So what happens when you actually change a user's password on-premises, with the DirSync tool's password sync enabled?

You change the user's password.

The password sync feature detects the change and synchronizes the changed password, within about a minute.

If the password sync was not successful due to connectivity (or any other) issues, the sync feature will automatically retry for the same user.

If there is any error during synchronization, it will log an event ID so that we can troubleshoot further why it failed.
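On the DirSync server you can sketch a quick check for those events; I'm assuming the documented password-sync event IDs 656 (change request) and 657 (result) and the "Directory Synchronization" event source, so verify them against your DirSync build:

```powershell
# Pull the most recent password sync events from the Application log
Get-EventLog -LogName Application -Source "Directory Synchronization" -Newest 50 |
    Where-Object { $_.EventID -in 656, 657 } |
    Format-Table TimeGenerated, EventID, EntryType -AutoSize
```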

Once the password is successfully synced to Azure AD in the cloud, online users will be able to log on to their mailboxes without any issues, and the experience is seamless, as both the on-premises and cloud Azure AD hold the same credentials.

Internet email is received on port 25 of the Front End Transport service running on the CAS server, which then proxies it to the Transport service of the Mailbox server on port 2525. The Transport service processes the message and routes it to transport delivery on the server (Mailbox server) where the mailbox is active. The Mailbox Transport service then listens for and receives the email on port 475 and delivers it to the local active mailbox database.

Scenario 2 – Incoming mail on two multi-roles

Internet email is received by the hardware load balancer/NLB on port 25, and whichever CAS server is available at the back end receives it; the email is delivered to one of the CAS servers' Front End Transport service on port 25 – in this case, per the slide, it chose server 2. Notice that the two recipients sit individually on both servers; the CAS is stateless and simply proxies the request. It then passes the email to the Transport service of the Mailbox server (server 2). Before handing over to the Transport service (of the Mailbox server), it checks the recipient type – mailbox or mail-enabled – and if a mailbox, its version, the number of recipients, distribution groups, and so on, and accordingly routes to the best available Transport service of any Mailbox server. (In this case it delivered to the server 1 CAS/MBX server; it could equally have delivered to its local server 1 – it doesn't matter at all. What the CAS server looks for is the Transport service of a Mailbox server, local or remote, whichever it finds best available.)

The Transport service on server 1 categorizes the message, sees that there are two recipients located on active database copies on two different Mailbox servers, bifurcates it into 2 copies, and submits each to the Mailbox Transport service on the Mailbox server that is local to the active database copy for message delivery.

The Mailbox Transport service then does the content conversion and delivers each copy to the local active mailbox database via RPC.
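If you want to watch those hops yourself, message tracking on the Mailbox server is one way; a sketch, with a placeholder recipient:

```powershell
# Follow a message through its RECEIVE (Front End), transfer and DELIVER events
Get-MessageTrackingLog -Recipients "user1@contoso.com" -ResultSize 50 |
    Sort-Object Timestamp |
    Format-Table Timestamp, EventId, Source, ServerHostname -AutoSize
```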

Scenario 3 – Originating mail on two multi-roles

Let us now consider the same scenario as the second, but instead of mail coming on-premises, we will see email going out, originating from the Mailbox server roles.

A message originating on server 1 is sent to 3 recipients (one on the same server 1, the second on server 2, and the third on the internet).

When the user sends an email, the mailbox submission service submits the message to the Transport service via RPC. Once the Transport service on Mailbox server 1 receives the message, it chooses any of the local or remote Mailbox server Transport services. In this scenario, the Transport service of server 1 connects to the Transport service of server 2; the Transport service on Mailbox server 2 categorizes the message, sees that there are three recipients (2 internal and 1 external), bifurcates the recipients, and delivers the copies accordingly (one to server 1's Front End Transport service, one to server 1's Mailbox Transport service, and one to the local server's Mailbox Transport service). The Mailbox Transport service then delivers the message to the local active mailbox database copy via RPC, and the FE service on server 1 (assuming we configured the send connector with proxy enabled through server 1 for outgoing mail) handles the external delivery.

Scenario 4 – Incoming to DG on separated roles

Here we now have four sites, each with 1 CAS and 1 MBX server as separate roles.

Now an internet sender targets four recipients, one sitting on the Mailbox server of each site.

With MX pointed to one of the sites, and a load balancer behind the CAS if present, the message is delivered to one of the available CAS servers in that particular site. In this scenario, let us choose the third CAS, at the right corner of the slide.

The Front End Transport service receives the message and sees there are multiple mailbox-enabled recipients to deliver to, so it chooses the available Transport service in the local site as the best option for that particular CAS server and delivers the message to the Transport service on the Mailbox server in that local site. The Transport service then bifurcates the message, creates 4 copies, and delivers them to the Mailbox Transport component (one locally and 3 to the remote sites' Mailbox server Transport services). Finally, the Mailbox Transport service on each Mailbox server delivers to the local mailbox database copy via RPC.

Scenario 5 – Incoming mail to legacy mailbox

The fifth scenario is very similar to the fourth, but the change here is a fifth site running an Exchange 2010 HUB/MBX server.

As above, the message coming from the internet is delivered to the 2013 CAS in site 3, and the process is almost the same until it reaches the Mailbox Transport component of the site 3 Mailbox server; let's see what happens next.

Now, instead of creating 4 copies, the Transport service creates 5. The first four copies, whose mailboxes are on Exchange 2013 servers, are delivered just as in scenario 4, but the fifth copy's final delivery is to a different delivery group from the other 4 copies (which use the DAG delivery group): the mailbox server delivery group (the Exchange 2010 HUB server).

Since Mailbox Transport doesn't connect directly to the 2010 Mailbox server, it routes the message to the 2010 HUB server in the fifth site. The Transport service of the Exchange 2010 HUB server then makes the final delivery to the 2010 Mailbox server.

Scenario 6 – Client submission to single namespace

In this scenario, the user Diana has her mailbox located on Mailbox server 1 in site A. She is a roaming user who travels often; right now she happens to be in another site, B, for some work, and wants to send email to the internet. Let's see how it goes.

When Diana accesses her mailbox, she connects to the local site's CAS server (site B) and tries to send an email. The CAS server checks where the mailbox is located, and since it finds the user on Mailbox server 1 in site A, the Front End Transport service of the CAS in site B connects directly to the Transport service on site A's Mailbox server 1. Since she has already been authenticated at the site B CAS server, the Transport service on Mailbox server 1 simply sends out the email via front-end proxy to its local site A CAS server's Front End Transport service (as configured in the send connector), and from there it goes out to the internet from site A.

Scenario 7 – Client submission for legacy mailbox

OK, now a scenario similar to the sixth, but instead of a 2013 mailbox it is now Exchange 2010.

When Diana accesses her mailbox, she connects to the site B 2013 CAS server; the Front End Transport service authenticates her and looks up the location and version of the mailbox. Since the CAS FE doesn't talk directly to an Exchange 2010 MBX server, it connects instead to the Transport service of the Exchange 2013 Mailbox server in its local site.

The Transport service on the Exchange 2013 Mailbox server in site B then categorizes the message and connects to the 2010 HUB server in site B based on the AD site cost, and from there the email is delivered to the mailbox in site A, HUB to MBX 2010.

In Exchange 2010, when a message was sent from one HUB server to relay out to the next (first) hop, the shadow queue was generated on the source (previous-hop) server; in 2013 the shadow queue is created at the first hop, right from the server where the mail is generated, for guaranteed redundancy.

In 2013 the DAG is now the transport boundary for high availability, as compared to the transport dumpster, which was a single point of failure: if the server holding the shadow queue (where the mail.que was generated) was lost in a lossy failover, and the HUB server on which the transport dumpster was configured failed too, then you could not recover the email.

So now that the DAG is an HA boundary, any message coming into the DAG is queued not only in the local site's shadow queue but also in the remote site's, meaning that in the event of a local site failure you can resubmit the email at failover time or when manually mounting the databases. Resubmission is also possible manually, using a PowerShell command.
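That manual resubmit from Safety Net can be sketched like this (the database name and time window are placeholders):

```powershell
# After a lossy failover, ask Safety Net to resubmit messages for the recovered database
$db = Get-MailboxDatabase "DB01"
Add-ResubmitRequest -Destination $db.Guid `
    -StartTime "02/01/2014 09:00:00" -EndTime "02/01/2014 18:00:00"

Get-ResubmitRequest   # monitor the status of the resubmit
```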

That HA queue is what's called Safety Net, which was introduced in Exchange Online (on the 2010 codebase) and later revealed as a new feature with Exchange 2013.

Safety Net retains data for a set period of time (time-based, default 2 days), regardless of whether the message has been successfully replicated to all database copies or delivered to its final destination.

FYI – the Safety Net period should be at least equal to or greater than your lagged database copy's lag time, to prevent data loss.
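For example, to check the current value and raise it to cover a hypothetical 3-day lagged copy:

```powershell
Get-TransportConfig | Format-List SafetyNetHoldTime   # default is 2.00:00:00 (2 days)

# Keep Safety Net at least as long as the longest ReplayLagTime in the DAG
Set-TransportConfig -SafetyNetHoldTime 3.00:00:00
```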

An Exchange 2010 client queries internet DNS for autodiscover.contoso.com and connects to the internet-facing site's Exchange 2013 CAS server, which then proxies the request to a 2010 CAS server. CAS 2010 handles the request, generates the autodiscover.xml response, and replies back to the client.

An Exchange 2007 client queries internet DNS for autodiscover.contoso.com and connects to the internet-facing site's Exchange 2013 CAS server, which passes the request to a 2013 Mailbox server (internet-facing site). Mailbox server 2013 handles the request, generates the 2007 autodiscover.xml, and replies back to the client.

An Exchange 2007 client (non-internet-facing site) queries internet DNS for autodiscover.contoso.com and connects to the internet-facing site's Exchange 2013 CAS server, which proxies the request to a 2013 Mailbox server. Mailbox server 2013 handles the request, generates the 2007 autodiscover.xml, and replies back to the client.

An Exchange 2010 client (internet-facing site) queries internal DNS for the service connection point object, i.e. autodiscover.contoso.com, and connects to the Exchange 2013 CAS server, which proxies the request to a 2010 CAS server.

In this case, irrespective of whether the mailbox is hosted on an Exchange 2010 Mailbox server in site A or site B, CAS 2013 proxies the request to CAS 2010. CAS 2010 then handles the request, generates the autodiscover.xml response, and replies back to the client.

An Exchange 2007 client (internet-facing site) queries internal DNS for the service connection point object, i.e. autodiscover.contoso.com, and connects to the Exchange 2013 CAS server, which proxies the request to a 2013 Mailbox server.

In this case, irrespective of whether the mailbox is hosted on an Exchange 2007 Mailbox server in site A or site B, CAS 2013 proxies the request to Mailbox server 2013. Mailbox 2013 then handles the request, generates the Exchange 2007 autodiscover.xml response, and replies back to the client.
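The SCP value those internal clients find is set per CAS server; a sketch with a placeholder server name:

```powershell
# Point the Service Connection Point at the autodiscover namespace
Set-ClientAccessServer -Identity "EX2013CAS01" `
    -AutoDiscoverServiceInternalUri "https://autodiscover.contoso.com/Autodiscover/Autodiscover.xml"

Get-ClientAccessServer | Format-Table Name, AutoDiscoverServiceInternalUri
```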

An Exchange 2007/2010 client queries internet DNS for mail.contoso.com and connects to the internet-facing site's Exchange 2013 CAS server, which redirects the request to the 2007/2010 CAS server (internet-facing site) based on the mailbox version.

A client queries internet DNS for mail.contoso.com and connects to the internet-facing site's Exchange 2013 CAS server, which redirects the request to the 2007/2010 CAS server (non-internet-facing site).

What is important here is to enable Outlook Anywhere on all Exchange 2007/2010 CAS servers with NTLM authentication enabled, so that the OA request can be proxied to the endpoint in the other site as well. Also, the FQDN must be the same for Exchange 2013/2010/2007 OA, as it is handed back to the client in the URL.
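On a legacy Exchange 2010 CAS that might be sketched as follows (the server name and identity are placeholders; on Exchange 2007 the parameters differ slightly):

```powershell
# Enable Outlook Anywhere if it isn't already
Enable-OutlookAnywhere -Server "EX2010CAS01" -ExternalHostname "mail.contoso.com" `
    -ClientAuthenticationMethod Basic -SSLOffloading $false

# Make sure IIS accepts NTLM so Exchange 2013 CAS can proxy OA traffic to it
Set-OutlookAnywhere -Identity "EX2010CAS01\Rpc (Default Web Site)" `
    -IISAuthenticationMethods Basic,NTLM
```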

An Exchange 2007 client (site A, legacy.mail.contoso.com) using the FQDN mail.contoso.com connects to the Exchange 2013 CAS server's OWA logon page; after the credentials are entered, based on the mailbox version the request is redirected to the Exchange 2007 CAS server (internet-facing site), which prompts another OWA logon page – a dual authentication. This was the behavior up to Exchange 2013 CU1, where silent redirection was not yet implemented.

With Exchange 2013 CU2, silent redirection (single sign-on) takes place, and the OWA login page is displayed to the end user only once.

An Exchange 2007 client (site B, legacy.mail.contoso.com users) using the FQDN mail.contoso.com connects to the Exchange 2013 CAS server's OWA logon page; after the credentials are entered, based on the mailbox version the request is redirected to the Exchange 2007 CAS server (internet-facing site A), which prompts another OWA logon page – dual authentication. This was the behavior up to Exchange 2013 RTM, where silent redirection was not yet implemented. Further, since the client is in site B, the internet-facing site A Exchange 2007 CAS server proxies the cross-site request to the site B Exchange 2007 CAS server.

With Exchange 2013 CU2, silent redirection (single sign-on) takes place, and the OWA login page is displayed to the end user only once.

An Exchange 2007 client (site B, Europe.mail.contoso.com) using the FQDN mail.contoso.com connects to the Exchange 2013 CAS server's OWA logon page; after the credentials are entered, based on the mailbox version the request is redirected to the Exchange 2007 CAS server (Europe.mail.contoso.com, internet-facing site B), which prompts another OWA logon page – dual authentication. This was the behavior up to Exchange 2013 RTM, where silent redirection was not yet implemented.

With Exchange 2013 CU2, silent redirection (single sign-on) takes place, and the OWA login page is displayed to the end user only once.

Outlook Web App – Exchange 2010 coexistence with Exchange 2013

A client queries the FQDN mail.contoso.com and connects to the Exchange 2013 CAS server's OWA logon page; after the credentials are entered, based on the mailbox version the request is proxied to the Exchange 2010 CAS server (internet-facing site).

A client (non-internet-facing site B) queries the FQDN mail.contoso.com and connects to the Exchange 2013 CAS server (internet-facing site) OWA logon page; after the credentials are entered, based on the mailbox version the request is proxied cross-site to the Exchange 2010 CAS server (non-internet-facing site).

An Exchange 2010 client using the FQDN mail.contoso.com connects to the Exchange 2013 CAS server's OWA logon page; after the credentials are entered, based on the mailbox version the request is redirected to the Exchange 2010 CAS server (internet-facing site B), which prompts another OWA logon page – dual authentication. This was the behavior up to Exchange 2013 RTM, where silent redirection was not yet implemented.

With Exchange 2013 CU2, silent redirection (single sign-on) takes place, and the OWA login page is displayed to the end user only once.

A client (site B, Europe.mail.contoso.com users) using the FQDN mail.contoso.com connects to the Exchange 2013 CAS server's OWA logon page (site A); after the credentials are entered, based on the mailbox version the request is redirected to the Exchange 2013 CAS server (internet-facing site B), which prompts another OWA logon page – dual authentication. This was the behavior up to Exchange 2013 RTM, where silent redirection was not yet implemented.

With Exchange 2013 CU2, silent redirection (single sign-on) takes place, and the OWA login page is displayed to the end user only once.

Outlook Web App – Exchange 2013 Only – CAS Proxies / Same Namespace

Exchange 2013 – mail.contoso.com (Internet facing Site A)

Exchange 2013 – mail.contoso.com (Internet facing Site B)

A client (site B, Europe.mail.contoso.com users) using the FQDN mail.contoso.com connects to the Exchange 2013 CAS server's OWA logon page (site A); after the credentials are entered, based on the mailbox version the request goes directly to the Exchange 2013 Mailbox server (in site B), which overcomes the looping scenario seen with Exchange 2007 or 2010, thanks to the same external URL namespace.

A client (internet-facing site B users – Europe.mail.contoso.com) queries DNS for the FQDN mail.contoso.com and connects to the Exchange 2013 CAS server (internet-facing site A), which proxies the request to a 2013 Mailbox server. The Mailbox 2013 server then proxies the request cross-site to the Exchange 2007 CAS/MBX server in site B.

If your Exchange 2007 users are moved from site B Europe.mail.contoso.com to the mail.contoso.com Exchange 2013 server, the profile might have to be reconfigured, as the HTTP 451 redirect comes into play in this scenario.

Active Sync – Exchange 2010 coexistence with Exchange 2013

Exchange 2013 – mail.contoso.com (internet facing site A)

Exchange 2010 – europe.mail.contoso.com (Internet facing site B)

A client (internet-facing site A) queries DNS for the FQDN mail.contoso.com and connects to the Exchange 2013 CAS server, which proxies the request to the 2010 CAS server.

A client (non-internet-facing site B) queries DNS for the FQDN mail.contoso.com and connects to the Exchange 2013 CAS server (internet-facing site), which proxies the request cross-site to the 2010 CAS server in site B.

A client (internet-facing site B users – Europe.mail.contoso.com) queries DNS for the FQDN mail.contoso.com and connects to the Exchange 2013 CAS server, which proxies the request cross-site to the 2010 CAS server in site B. (Remember the HTTP 451 redirect? It no longer exists; instead the request is proxied, even with multiple namespaces.)

Web Services – Exchange 2007 coexistence with Exchange 2013

Exchange 2007 – legacy.mail.contoso.com (Internet facing site A)

Exchange 2007 – Europe.mail.contoso.com (Internet facing site B)

Exchange 2013 – mail.contoso.com (Internet facing site A)

Autodiscover is now responsible for giving the client the web services URL. When an Exchange 2007 client in Site A connects to autodiscover.contoso.com, Autodiscover returns the right CAS server URL based on the mailbox version, and the user then connects directly to the Exchange 2007 CAS server (legacy.mail.contoso.com for Site A users).

When an Exchange 2007 client in Site B connects to autodiscover.contoso.com, Autodiscover returns the right CAS server URL based on the mailbox version; the user then connects to the Exchange 2007 CAS server in Site A (legacy.mail.contoso.com), which proxies the request on to the Exchange 2007 CAS server in Site B.

When an Exchange 2007 client in Site B (a europe.mail.contoso.com user) connects to autodiscover.contoso.com, Autodiscover returns the right CAS server URL based on the mailbox version, and the user then connects directly to the Exchange 2007 CAS server in Site B (europe.mail.contoso.com).

Here, Autodiscover is responsible for handing the web services client the right URL and pointing it in the right direction.
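As a side note, the Autodiscover exchange itself is a simple XML POST. The sketch below only builds the request body an Outlook client sends to https://autodiscover.contoso.com/autodiscover/autodiscover.xml; it does not send anything, and the mailbox address is a placeholder.

```python
# Sketch of the POX (plain-old-XML) Autodiscover request an Outlook client
# POSTs to https://autodiscover.<domain>/autodiscover/autodiscover.xml.
# The mailbox address below is a placeholder; no request is actually sent here.
from xml.etree import ElementTree as ET

REQUEST_NS = "http://schemas.microsoft.com/exchange/autodiscover/outlook/requestschema/2006"
RESPONSE_NS = "http://schemas.microsoft.com/exchange/autodiscover/outlook/responseschema/2006a"

def build_autodiscover_request(email: str) -> bytes:
    root = ET.Element("Autodiscover", xmlns=REQUEST_NS)
    req = ET.SubElement(root, "Request")
    ET.SubElement(req, "EMailAddress").text = email
    ET.SubElement(req, "AcceptableResponseSchema").text = RESPONSE_NS
    return ET.tostring(root, encoding="utf-8")

body = build_autodiscover_request("user@contoso.com")
print(body.decode())
```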

Web Services – Exchange 2010 coexistence with Exchange 2013

Exchange 2010 – Europe.mail.contoso.com (Internet facing site B)

Exchange 2013 – mail.contoso.com (Internet facing site A)

An Exchange 2010 client (a Site A mail.contoso.com user) uses the FQDN mail.contoso.com and connects to the Exchange 2013 CAS server which, based on the mailbox version, redirects the request to the Exchange 2010 CAS server (internet-facing site).

An Exchange 2010 client (a Site B europe.mail.contoso.com user) uses the FQDN mail.contoso.com and connects to the Exchange 2013 CAS server which, based on the mailbox version, sends the request cross-site to the Exchange 2010 CAS server (internet-facing Site B).

Here again, Autodiscover is responsible for handing the web services client the right URL and pointing it in the right direction.

This post was inspired by Greg Taylor's presentation on CAS 2013, which made me want to blog about it. You can check out his TechEd session.

Even in this case I wouldn't give up the passion or say anything bad about the program, because I already know what it means: I am preparing for it and working through the pre-reading list.

Instead, I would rather bring you those highly motivating words straight from the Masters, who have already thrilled the hearts of Exchange professionals with their invaluable comments and their passion.

Hear from the Masters what it takes – John Rodriguez, Andrew Ehrensing, David Zazzo & Greg Taylor:

The people who come to this program are really looking either to take it to the next level or to fill in all the gaps; they already know Exchange and are looking to deepen that knowledge. You don't come here to learn; you come here to improve and go beyond.

You are also in a room full of peers who are at the top of their field, so instead of being the one expert in the room, you realize that you are just one of 15 or 20 people, all operating at your level and your caliber.

You can't get this content anywhere else, you won't get this content anywhere else, and just going through it will hopefully make you a better Exchange expert.

So let's get the show on the road: three weeks of living and breathing Exchange.

People who go through the program come out the other side with a far greater awareness and understanding of the product.

For example, if you already knew some client access, you will learn twice as much here; if you already understood disaster recovery, you will learn even more here. The idea is to take the basic-level content you will find in the Microsoft Certified IT Professional track and go far beyond it.

Don't underestimate the program. Don't go in thinking you can cruise through because you know Exchange Server and passed all the MCP exams; you will very quickly be realigned and recalibrated.

It is a very intense experience: the days are long and the content comes at you hard and fast. It's not blink-and-you-miss-it, but it is a lot of content coming very quickly.

The first few days are about settling in; then you get into a routine, and before you know it you are waking up at 6:00 AM and going to bed at 11:00 PM every day, finding yourself in the middle of nothing but Exchange.

The value is going to show in the long-term results of your projects, in the technical quality of your deliverables; it should be higher.

Somebody who has been through this program presents a much more complete and professional picture to a client or customer; they understand why we make the decisions, and why Microsoft makes the recommendations, that we do.

Then it is about proving yourself to your customer and having a stamp of approval: you have been signed off by Microsoft, you have passed the technical bar. Customers can feel assured that they are working with top-tier experts trained and signed off by Microsoft, and the product group really is a big part of that; the solution should work to meet the needs of the customer.

Be prepared to dedicate yourself to Exchange for three weeks; it's that simple. You cannot juggle this with work, and you cannot juggle this with family visits or going out. You are here for Exchange, you are going to learn Exchange, and you are going to be immersed in it, almost subsumed by it.

Having the peer group, both as a support system and as a sounding board, is also a benefit in the class itself.

A big part of participating in the MCM rotation is the access to the product group, and to a community you might not even have known existed. Once you know it exists, and that everyone in it operates at the same level of technical excellence, it becomes an invaluable resource. "I have this weird edge case; what do you guys think?" You send it out and you get the hive mind: all the other MCMs and Rangers, from Exchange 2003 through 2010 and on to the next version, offering expert ideas and opinions on how to solve an edge case you haven't come across before.

But if you dedicate yourself to it, and accept that this is your focus for those three weeks, the reward at the end is that you will be unrecognizable; you won't recognize yourself as an Exchange professional, you will grow that much. I'm not talking about maturation or anything like that; I mean the amount of content you will learn is just staggering.

Ladder Up With Me: Five Steps Towards Cloud Microsoft Office 365 – Exchange Wave 15 (August 7, 2013)

Let me walk you through five simple steps towards the cloud with Microsoft O365 in this article, and you will be able to experience the whole new Exchange Wave 15 with minimal expense; in my case, approx. Rs. 102 only.

Register your company DNS domain – e.g. in my case I registered the domain msexchangeasia.com at godaddy.com, which is cheap; I got it for approx. Rs. 102.

Subscribe for a free trial – Subscribing for a trial account gives you the chance to experience the all-new Exchange and take advantage of its features.

Configure your domain – Once you have the trial account, add and verify your company domain in the Office 365 portal.

DNS record update – Using the DNS records automatically generated by the O365 portal, update the MX/CNAME/TXT/SRV records at your public DNS registrar.

Create mailboxes → send/receive emails – Once you are done with the above four steps, all that remains is to collaborate with colleagues and friends and enjoy the free subscription for a month.
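As a rough illustration of step four, the sketch below generates the kind of record set the portal typically asks for. The exact hostnames and values vary by tenant and service, so always copy the values from your own O365 portal; these are representative examples only.

```python
# Illustrative checklist of the DNS records the O365 portal typically asks for.
# Use the exact values shown in your own portal; these are representative only.

def o365_dns_checklist(domain: str) -> list:
    label = domain.replace(".", "-")
    return [
        ("MX",    domain,                   f"{label}.mail.protection.outlook.com"),
        ("CNAME", f"autodiscover.{domain}", "autodiscover.outlook.com"),
        ("TXT",   domain,                   "v=spf1 include:spf.protection.outlook.com -all"),
        ("SRV",   f"_sip._tls.{domain}",    "sipdir.online.lync.com"),
    ]

for rtype, name, value in o365_dns_checklist("msexchangeasia.com"):
    print(f"{rtype:5} {name:35} -> {value}")
```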

Understanding Information Technology Infrastructure Library v3 Foundation – Part 2 (March 30, 2013)

Hope you enjoyed going through the first part of this series; let me continue with the next and final part.

3. Service Transition:

The objective is to ensure that new, modified or retired services meet the expectations of the business as documented in the service strategy and service design stages of the lifecycle. In this stage, service changes are planned and managed efficiently and effectively, including their risks. It sets the right expectations about the performance and use of new or changed services. It covers the planning, building, testing, evaluating and deploying of new or changed services. With the introduction of new services, it also takes care of existing services, minimizing the unintended consequences of change.

Let us now look briefly at the important topics involved in Service Transition.

Configuration Item – A service asset that needs to be managed in order to deliver IT services.

Configuration Management System – Managing large and complex IT services and infrastructure requires a supporting system for service asset and configuration management, known as the configuration management system. It holds all the information about configuration items within the designated scope. It maintains the relationships between all the service components and may also include records for related incidents, problems, known errors, changes and releases.

Service Knowledge Management System – The set of tools and databases used to manage knowledge, information and data. Many configuration items are available in the form of knowledge or information, and these are typically stored in the SKMS – for example, a service level agreement, a report template or a definitive media library (explained below).

Definitive Media Library – The secure library in which the definitive, authorized versions of all media configuration items are stored and protected. It contains all master copies of controlled documentation and definitive copies of purchased software, together with the license documentation or information.

Change Management Process – A change is any addition, modification or removal that could have an effect on IT services. This includes changes to all architectures, processes, tools, metrics and documentation, as well as changes to IT services and other configuration items. There are three different types of service change:

Standard Change – A pre-authorized change that is low risk, relatively common and follows a procedure or work instruction.

Emergency Change – A change that must be implemented as soon as possible, for example to resolve a major incident.

Normal Change – Any service change that is not a standard change or an emergency change.
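The three change types above can be sketched as a toy classifier; the criteria here are deliberately simplified for illustration and are not a full ITIL decision procedure.

```python
# Toy classifier for the three ITIL change types described above.
# The criteria are simplified purely for illustration.

def classify_change(pre_authorized: bool, low_risk: bool, urgent: bool) -> str:
    if urgent:
        return "emergency"  # must be implemented as soon as possible
    if pre_authorized and low_risk:
        return "standard"   # follows an agreed procedure or work instruction
    return "normal"         # everything else goes through normal change management

print(classify_change(pre_authorized=True, low_risk=True, urgent=False))   # standard
print(classify_change(pre_authorized=False, low_risk=False, urgent=True))  # emergency
print(classify_change(pre_authorized=False, low_risk=True, urgent=False))  # normal
```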

Change Model – Also called a change template; it predefines the steps that should be taken to handle a particular type of change in an agreed way.

Remediation Planning – The actions taken to recover after a failed change or release. This is why test plans are prepared: to validate a successful change or release.

Change Advisory Board – A body that exists to support the authorization of changes and to assist change management in the assessment, prioritization and scheduling of changes. It involves different stakeholders depending on the changes being considered.

Emergency CAB – When a required change cannot wait for the full CAB process, it can be authorized as quickly as possible with the help of the ECAB, and the rest of the process is completed afterwards.

Release and Deployment Management – Its purpose is to plan, schedule and control the build, test and deployment of releases, and to deliver the new functionality required by the business while protecting the integrity of existing services. There are four phases to release and deployment, as mentioned below.

Release and Deployment Planning – This phase starts with change management authorization to plan a release and ends with change management authorization to create the release.

Release Build & Test – This phase starts with change management authorization to build the release and ends with change management authorization for the baselined release package to be checked into the definitive media library.

Deployment – This phase starts with change management authorization to deploy the release package to one or more target environments and ends with handover to the service operation functions.

Review and Close – Experiences and feedback are captured, performance targets and achievements are reviewed, and lessons are learned.
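The four phases can be sketched as a simple gated pipeline. This is a simplification in which every phase is gated by a sign-off; in reality the authorization points sit exactly where the phase descriptions above place them.

```python
# Minimal sketch of the four release-and-deployment phases as a gated pipeline.
# Simplified: here every phase requires a change management sign-off to proceed.

PHASES = ["plan release", "build and test", "deploy", "review and close"]

def run_release(authorizations: list) -> list:
    """Advance through phases; stop at the first missing authorization."""
    completed = []
    for phase, authorized in zip(PHASES, authorizations):
        if not authorized:
            break  # change management has not signed off; release halts here
        completed.append(phase)
    return completed

print(run_release([True, True, True, True]))   # full release lifecycle
print(run_release([True, True, False, True]))  # blocked before deployment
```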

Knowledge Management – Its purpose is to ensure that reliable and secure knowledge, information and data are available throughout the service lifecycle.

DIKW – For now it is essential just to know the progression: Data → Information → Knowledge → Wisdom.

4. Service Operation:

The purpose is to coordinate and carry out the activities and processes required to deliver and manage services at agreed levels for business users and customers. Service Operation is also responsible for the ongoing management of the technology used to deliver and support services. Value is realized by operating the services effectively and efficiently, and strategic objectives are ultimately realized through service operation, making it a critical capability. The objective is to maintain business satisfaction and confidence in IT through effective and efficient delivery and support of agreed IT services, and to minimize the impact of service outages on day-to-day business activities.

Let us now look briefly at the important topics involved in Service Operation.

Workaround – A temporary way of overcoming difficulties in a service.

Known Error & KEDB – As soon as a diagnosis is complete, and particularly where a workaround has been found (even though it may not be a permanent solution), a known error record must be raised and placed in the Known Error Database, so that if further incidents or problems arise they can be identified and the service restored more quickly.

Role of communications – An important principle is that all communication must have an intended purpose or a resultant action.

Incident Management – The purpose of incident management is to restore normal service as quickly as possible and minimize the adverse impact on business operations. An incident can be reported by anyone.

Problem Management – The process responsible for managing the lifecycle of all problems; ITIL defines a problem as the underlying cause of one or more incidents. Problem management seeks to get to the root cause of incidents, document and communicate known errors, and initiate actions to improve or correct the situation. The objective is to prevent problems and the resulting incidents from happening, eliminate recurring incidents, and minimize the impact of incidents that cannot be prevented.

Event Management – The basis for operational monitoring and control. If events are programmed to communicate operational information as well as warnings and exceptions, they can also be used as a basis for automating many routine operations management activities.

Request Fulfillment – It provides a channel for users to request and receive standard services for which a predefined authorization and qualification process exists. It also provides information to users and customers about the availability of services and the procedure for obtaining them.

Access Management – Its purpose is to provide the rights for users to be able to use a service or group of services. It is therefore the execution of the policies and actions defined in information security management.

5. Continual Service Improvement

Last but not least is this important volume. It is not actually a separate stage; it evolves alongside the other stages mentioned above. The purpose is to align IT services with changing business needs by identifying and implementing improvements to the IT services that support business processes. It reviews, analyzes and makes recommendations on improvement opportunities in each lifecycle phase. It improves IT service quality and the efficiency and cost-effectiveness of delivering IT services without adversely affecting customer satisfaction. It ensures that applicable quality management methods are used to support continual service improvement, and that processes have clearly defined objectives and measurements that lead to actionable improvements.

W. Edwards Deming is best known for his management philosophy, which leads to higher quality, increased productivity and a more competitive position. He formulated 14 points of attention for managers.

The PDCA (Plan, Do, Check, Act) cycle is critical at two points in CSI: the implementation of CSI itself, and the application of CSI to services and service management processes.
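As a toy illustration only, the PDCA cycle can be modeled as an iterative loop that plans the gap to a target, applies an improvement, checks the result, and repeats; the numeric metric and the improvement step here are entirely invented.

```python
# Toy model of Deming's PDCA cycle as an iterative improvement loop.
# The metric, target and improvement step are invented purely for illustration.

def pdca(metric: float, target: float, improve_by: float = 1.0, max_cycles: int = 10) -> float:
    for _ in range(max_cycles):
        plan = target - metric           # Plan: the gap we intend to close
        if plan <= 0:
            break                        # Check: target met, no further action
        metric += min(improve_by, plan)  # Do: apply the planned improvement
        # Act: standardize the gain, then loop back to Plan for the next cycle
    return metric

print(pdca(metric=5.0, target=8.0))  # converges toward the target over cycles
```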

Seven-Step Improvement Process – Its purpose is to define and manage the steps needed to identify, define, gather, process, analyze, present and implement improvements. It also includes analysis of the performance and capabilities of services and processes throughout the lifecycle, as well as of partners and technology.

CSI Approach – It can be summarized as follows:

What is the vision? – The business vision, mission, goals and objectives.

Where are we now? – A baseline assessment of the current situation.

Where do we want to be? – Measurable targets.

How do we get there? – Detail the CSI plan to achieve higher-quality services.

Did we get there? – Verify that measurements and metrics are in place to ensure the milestones were achieved.

How do we keep the momentum going? – Ensure that the momentum for quality improvement is maintained by making sure changes become embedded in the organization.

Hope this was informative.

Understanding Information Technology Infrastructure Library v3 Foundation – Part 1 (March 16, 2013)

This is what I say: "ITIL is one of the methodologies for delivering IT Service Management, enabling service providers to deliver value that meets the customer's expectations within the agreed SLA."

If I tried to jot down everything I meant by that statement, this blog page wouldn't be enough.

Some time back I had the opportunity to go through ITIL Foundation v3 training, and I came to know its worth and why it is important to apply it to our professional activities, as well as to our personal lives. I should thank IBM for this opportunity and the benefits I gained from it. I would especially like to thank the trainer from whom I got this knowledge, Ganesh Shrishrimal, who holds the ITIL Expert certification and has solid industry experience.

The concepts were made clear, and I realized it is essential for IT people to know this methodology and apply it to their day-to-day activities.

I would like to share what I have learned and hope it will be informative for you. I will not take you through the whole syllabus, only the important topics that are essential to know.

To start, let us look at what ITIL is, where it came from, and why we need to practice it in an organization.

In the 1980s, the UK government's Central Computer and Telecommunications Agency developed a set of recommendations, standards and frameworks to help organizations improve their IT Service Management independently. ITIL stands for Information Technology Infrastructure Library; it is a collection of books covering specific best practices for IT Service Management. ITIL has been updated from time to time as required, and v3 is currently the latest version.

ITIL has five major volumes, as listed below, and I will take you through an overview of what I know and actually like about each.

Service Strategy

Service Design

Service Transition

Service Operation

Continual Service Improvement

Let's understand them one by one in a bit more detail, reading from the service provider's perspective.

1. Service Strategy:

It is known as the conceptualization of value creation; this was the key point I took away, and it means a lot. It is about the ability to create and deliver value, with a clear identification of the services and of the customers who use them. It precisely clarifies how services will be delivered and funded, to whom they will be delivered, and for what purpose. It helps in forecasting, giving the means to understand the organization's capability to deliver the service. It helps to determine which services will achieve the strategy, what level of investment will be required, at what level of demand, and the means to ensure a working relationship exists between the customer and the service provider. There are different types of customers and businesses, but the strategic point here is for the service provider to deliver services that meet the customer's business outcomes.

Let us now look briefly at the important topics involved in Service Strategy.

There are two important aspects to consider for value creation: the combination of utility (fit for purpose) and warranty (fit for use).

Service Catalog – The offerings made known to the customer.

Service Portfolio – The complete set of services managed by a service provider.

IT Governance – Ensures there is fairness, control, transparency and accountability in the services provided to the customer.

Business Case – Also called a business justification; a support and planning tool that projects the likely consequences of a business action.

Risk Management – Risk is the uncertainty of outcome, whether a positive opportunity or a negative threat; it is often described as "something that might happen".

Service Provider Types – There are three types of service providers: internal, shared services and external.

Patterns of Business Activity – Customer behavior can be influenced through differential charging, penalties, rewards and timeouts.

Business Relationship Management – Establishes and maintains a business relationship between the service provider and the customer, based on understanding the customer and its business needs.

There is much more to say about the topics above, but I do not want to go into depth here; let us move on to the next phase, Service Design.

2. Service Design

The purpose is to design IT services, together with the governing IT practices, processes and policies, to realize the service provider's strategy and to facilitate delivery to, and satisfaction of, the customer. The main objectives are to reduce the total cost of ownership, improve the quality of services, and make the implementation of new or changed services easier. It looks at identifying, defining and aligning the IT solution with the business requirements.

Service design is not planned in isolation or element by element; rather, it considers the overall service: the management information tools, the architectures, the technology, the service management processes, and the necessary measurements and metrics. This ensures that not only the functional elements are addressed by the design, but also that all of the management and operational requirements are addressed as a fundamental part of the design, not added as an afterthought.

Let us now look briefly at the important topics involved in Service Design.

4 P's – You may have already heard about the 3 P's, but here are all 4 P's: People, Processes, Products (technology) and Partners (suppliers). It is largely self-explanatory: implementing ITIL is all about preparing and planning the effective and efficient use of the 4 P's.

Service Design Package – The document defining all aspects of an IT service and its requirements through each stage of its lifecycle. It is an output of one or more combined service design phases and goes as an input to Service Transition.

Service Level Management – The purpose is to ensure that all current and planned IT services are delivered to agreed, achievable targets. It ensures that IT and its customers have a clear and unambiguous expectation of the level of service to be delivered. It includes three types of agreements, as mentioned below.

SLA – Service Level Agreement (between SP & Customer)

OLA – Operational Level Agreement (between the SP and its internal teams)

Underpinning Contracts (between the SP and a supplier)

Service Level Agreement Monitoring – Charts detailed performance against the SLA targets, together with details of any trends or specific actions being undertaken to improve service quality. The SLAM reporting mechanisms, intervals and report formats must be defined and agreed with the customers.

Service Catalog Management – Maintains a single source of consistent information on all services currently being offered to customers. The objective is to reflect the current details, status, interfaces and dependencies of all services that are running, or being prepared to run, in the live environment, according to the defined policies.

Availability Management – Ensures that the level of availability delivered in all IT services meets the agreed availability needs or service level targets in a cost-effective and timely manner. It includes two types of activities: reactive (monitoring, measuring, analyzing) and proactive (planning, designing).

FARMS – It is worth knowing the expansion: Fault tolerance, Availability, Reliability, Maintainability and Serviceability.

Information Security Management – Aligns IT security with business security and ensures that the confidentiality, integrity and availability of the organization's assets, information, data and IT services always meet the agreed needs of the business.

Supplier Management – Its purpose is to obtain value for money from suppliers and to provide seamless quality of IT service to the business, by ensuring that all contracts and agreements with suppliers support the needs of the business and that all suppliers meet their contractual commitments.

Capacity Management – Ensures that the capacity of IT services and the IT infrastructure meets the agreed capacity- and performance-related requirements in a cost-effective and timely manner. It is concerned with meeting both the current and future capacity and performance needs of the business.

IT Service Continuity Management – Supports the overall business continuity management process by managing the risks that could seriously affect IT services, so that the IT service provider can always provide the minimum agreed business-continuity-related service levels.

Business Impact Analysis – The purpose is to quantify the impact that loss of a service would have on the business. This could be a hard impact (financial loss) or a soft impact (e.g. public relations). The BIA identifies the most important services to the organization and is therefore a key input to the strategy.

Design Coordination – It coordinates all design activities across projects, changes, suppliers and support teams and manages schedules, resources and conflicts where required.