Elsewhere

It’s annoying how good Google search is. Many a time I search for a song in Apple Music, don’t find it, and think it’s not there. But then I do a Google search for “<song name> iTunes” and bam! it returns an iTunes link I can click to open the song in Apple Music. :) Neither Bing nor DuckDuckGo does this! It’s irritating because Apple Music should be doing this in the first place (it’s funny, right, that Google indexes Apple Music better than Apple itself) and one more reminder of how good Google is for searching, even with all its privacy concerns etc.

But they’re not integrated well into the movie. They distract from it rather than add to it. Most of them seem to be placed just because the songs have to go somewhere.

Didn’t like the background score much either.

The story seemed kind of directionless. It was marketed as a violent thriller of 3 sons fighting for their father’s empire. That fight doesn’t start until after the intermission, and even then we don’t really care for it.

The women seem to be there just for skin. Except for Jyothika, who has somewhat of a role, the rest are wasted. Which sucks, coz they seemed interesting, and ignoring them in favour of the three men didn’t do them justice.

Good to see Aravind Swamy after a long time!

That’s it really. It was an ok 2.5 hours. I could have spent it watching something else more worth my time, I guess … but ah well, Mani Rathnam movie, I wanted to watch it … and even though I was bored I kept on in the hopes that things might turn out to be interesting. (Hint: they didn’t!)

At work we are doing some migration work and the vendor we are migrating data to has a REST API which we can talk to using curl, passing data as JSON. I spent the last two weeks creating various bash scripts that can send and receive JSON, and while I did a good job (in my opinion), and learnt a lot of things (discovered jq for instance, it’s amazing!), and it was great working with bash and sed and awk and all these *nix tools after such a long time (and all this was done on macOS this time, so it was a good way to spend time on the macOS CLI too) … I now realize that, doh, PowerShell supports JSON and I could have used Invoke-WebRequest for all my curl calls, so I could have done the whole work in PowerShell … a much more familiar environment! In the process I could have saved some time and taken on a lot less stress.
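The pattern I was scripting looked roughly like this (the endpoint, fields, and values here are invented for illustration):

```shell
# Hypothetical sketch: POST JSON with curl, parse the reply with jq.
payload='{"name":"widget","qty":3}'
# The real call went to the vendor's REST API, something like:
# curl -s -X POST -H 'Content-Type: application/json' -d "$payload" https://vendor.example/api/items
# Parsing a (simulated here) JSON response with jq:
response='{"id":42,"status":"created"}'
status=$(echo "$response" | jq -r '.status')
echo "$status"
```

(The PowerShell equivalent would be roughly Invoke-RestMethod -Method Post -Uri $url -Body $payload -ContentType 'application/json', which even parses the JSON response into an object for you.)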

That’s why I am bummed. I am proud I did a good job, but I also kind of wish I had been more aware of what PowerShell can do and taken the effort to Google a bit about it.

Thing is I have a huge soft corner for bash and all these things, so I know it’s my internal bias that just pushed me to jump at the opportunity and work with this rather than check out PowerShell. I do love me bash and sed and all those. :)

Been kind of binge watching “Nightflyers” on Netflix. The show doesn’t make much sense to me, and all the characters are kind of weird / dumb, and yet I am intrigued and keep watching. There’s probably a better way to spend 10 hours of my life than watching this, but I dunno … part of me wants to see where this goes. I guess it’s because the show began on a high note, with one of the characters killing others (killing everyone maybe?) and so I want to know how that came to be. But I just can’t make sense of the actions of the characters. There’s just a lot of things – an alien race, some Teke energy, some humans called L’s, computers, virtual reality, a girl who can plug into computers … it’s like someone decided to blend all these together and see what comes out of it. There’s no story or direction as such. It’s just going somewhere, and the only things keeping me interested are why the killings in the beginning happened, and that maybe all this irrational behavior is due to the alien Volcryn influence. If I had to pick a crew for an alien space expedition, this would definitely not be the bunch I go with.

I am still very surprised I didn’t just dump it. Goes to show how a suspenseful beginning can keep you hooked. Maybe that was the idea of the writers too. :)

I ditched “Titans” after about 4-5 episodes. It was similarly pointless and I stopped caring for the characters.

Speaking of stuff worth spending time on though, I loved this podcast interview with M. Night Shyamalan. I love M. Night Shyamalan movies, and I especially loved “Unbreakable”. To me, “Unbreakable” is a story idea that I had (seriously) but much much much better executed by M. Night. For me it was just a cool idea in my head of how the world might be, but seeing it on screen was just magical. I didn’t know the movie didn’t fare that well until recently though. For me “Unbreakable” and “Signs” are two of M. Night Shyamalan’s best movies (and top of my list of favorite movies). Both are kind of similar on one level – faith, reason, why – but very different too. I haven’t seen his “The Visit”, so got to watch that now. I dunno how but I missed that one (well I know how, I lost interest in his movies after “The Last Airbender” and that TV show he made – “Wayward Pines”).

Another good podcast episode I listened to recently is this interview with Christina Warren. I had previously heard Christina on TWiT but this was my first time hearing her being interviewed and it was a fun episode. I came across some interesting Mac app suggestions from her too.

I use both Apple Music and Spotify. And I pay for both too (esp. Spotify, as I prefer the higher quality music). I always felt however that the same song sounds better on Apple Music than Spotify, though until today I hadn’t read into it more. Turns out that, yeah, Apple Music uses 256kbps AAC while Spotify is 320kbps Ogg Vorbis (don’t be fooled by the numbers, AAC is a better format than Ogg Vorbis so the 256kbps actually translates to something higher if we compare like for like). Am glad to hear that in a way coz Apple Music is my primary music player, but I am also bummed to realize that I may not be getting the best possible quality with Spotify.

I love Spotify for being able to discover new music. I like its UI, and I find myself turning to it when I am in the mood to discover new stuff. With Apple Music I have a bunch of playlists etc., but often I am just in the mood for someone to make a decision for me. With Spotify I can go to the Discover section and it usually points me to something good. I have discovered so much new music through that. They have great playlists, and most of the time I enjoy whatever it points me to.

Should I be cheap and use Spotify purely for discovery and actually play the discovered song in Apple Music? I guess not. That’s not a very smooth workflow. Also, Spotify isn’t bad if I am listening on speakers. It’s only when I have my good headphones on that I notice the difference. I should just remember to use Apple Music if I am on headphones. (Also, the type of music matters. If I am listening to movie scores or something classical, then the quality matters. General pop or fusion etc. aren’t that fussy about quality.)

Ideally I should be signing up for a lossless streaming service like Tidal, but that isn’t available in Dubai yet. Sucks!

Ran into an irritating problem today with my task switcher Contexts. It stopped working and there was an orange exclamation mark in the menu bar saying some application has Secure Input turned on and until I close it Contexts can’t work. Initially it told me that Firefox was responsible, and I was about to close it when I realized that whenever I switch to a different app it blames that app as having Secure Input turned on.

So clearly the issue is elsewhere. This page gives you a list of apps that can usually have Secure Input turned on. Thanks to this forum post I learnt that you can find the offending app by running the following command:

ioreg -l -w 0 | grep SecureInput

Find the process ID from the kCGSSessionSecureInputPID field. Then use Activity Monitor (easier, as you can sort) or the following command:

ps auxww | grep <pid>
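The two steps can also be chained. A sketch (the sample line below just mimics the ioreg output format; on an actual Mac you’d feed real ioreg output through the same sed):

```shell
# Sample line mimicking what ioreg prints for the field mentioned above:
sample='    "kCGSSessionSecureInputPID"=157'
# Pull out just the numeric PID:
pid=$(echo "$sample" | sed -n 's/.*kCGSSessionSecureInputPID"[ ]*=[ ]*\([0-9]*\).*/\1/p')
echo "$pid"
# On a real Mac the whole lookup would be roughly:
# ps -p "$(ioreg -l -w 0 | sed -n 's/.*kCGSSessionSecureInputPID"[ ]*=[ ]*\([0-9]*\).*/\1/p' | head -1)"
```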

In my case the culprit turned out to be loginwindow. I tried to kill it but the system warned me that it would log me out. I was in no mood to get logged out, so on a whim I tried locking and unlocking the system. That worked! :)

I spent a lot of money on keyboards these past 2 months. I started with the Kinesis FreeStyle 2 Blue for Mac and while I loved it a lot I didn’t like the delays because of Bluetooth. I should have just purchased the wired version instead, but I decided to go for some other brand (just to try things out) and went with the even more expensive Das Keyboard 4 Professional for Mac. I had high hopes for it, being a mechanical keyboard with NKRO etc., but today I decided to return it. It is a good keyboard, mind you, but I dunno, I wasn’t that blown away by it. I mean not blown away enough to justify the super high cost; and it wasn’t entirely pleasurable to type on either. As with other keyboards I’d find the space and X keys sometimes stick – I think it’s just me, I am jinxed with these keys. I hate delays. I tend to type very fast, and the smallest of delays irritates me. And I don’t like stuck keys (no one does, I guess). Plus coming from the split keyboard design of the Kinesis FreeStyle 2, the Das Keyboard felt very cramped.

What I loved about the Das Keyboard was its integrated USB 3 hub and also the giant volume button and media keys. These were a pleasure and these are what I am going to genuinely miss.

Why am I going to miss these? Because I decided to simplify things. The whole reason for going with an external keyboard was better ergonomics, but somehow it wasn’t working out. I don’t mind the MacBook Pro keyboard, and in fact I had gotten used to it in the first few months; and while these external keyboards are way better to use than the MacBook Pro keyboard due to the extra travel etc., I realized I had better get used to the MacBook Pro keyboard. For one, I will be traveling and at that point I can’t lug around additional keyboards; and for another, my desk was taking on a complicated look with the extra keyboard and its cable etc. If I go wireless I notice a lag. If I go wired I still notice the occasional lag, plus the keyboard wires mess up my desk. Plus, since I have a keyboard and 2 screens, I have to keep the MacBook Pro at a distance and not really use it as a 3rd screen (except for say having iTunes open on it) and I wasn’t too happy with that either.

So enough with all this mess. Keep things simple. MacBook Pro + its inbuilt keyboard (which needs some getting used to, granted). Plus the two screens that I already had. Plus an extra trackpad (which I already had, coz I prefer the extra trackpad so I can use my left hand too for tracking). Plus the Logitech MX Master 2S mouse (which too I had purchased as part of my experimenting around, and which is useful for my right hand when I need a mouse as such). It’s a good feeling to be honest. Removing the keyboard and putting the MacBook Pro back at the centre of my table kind of brings a symmetry. I use the inbuilt MacBook Pro speakers for casual music listening and now they are back in the centre of my desk, so the music isn’t coming from one side of the desk (where the MacBook previously was). I can use all 4 USB-C ports as the MacBook Pro is again centered, so I don’t have to worry about distance. The headphone jack too is nearby so I can skip the long extender cable I had – and that’s one less cable on my desk. Everything is great, except that I have to get used to the butterfly keyboard again. :) That takes some getting used to. I mean, I don’t mind the keyboard, and I can type reasonably well without mistakes on it (and the autocorrect helps a lot) but it is not 100% error free and not 100% love. Something with more travel would definitely have been preferred. But well, you can’t get everything. Am pretty sure my hand is going to start hurting soon. :(

Another advantage of removing the keyboard is that I feel “freer”. As in, now I am no longer tied to my desk or in need of all this extra paraphernalia. If I am bored of sitting at my desk or want to take the MacBook Pro elsewhere, I can just unhook it and go. I don’t need to carry around the keyboard etc. coz I am no longer used to it. I have a StarTech Thunderbolt to Dual HDMI adapter that drives my two monitors, so I unplug that and simply move on … whenever I need to. A very nice thing about macOS is that when I later plug in the adapter again, it automatically places the windows back on the screens they were on. So convenient! (The window manager isn’t without its faults, but I love this feature and also the fact that even after a reboot it places everything back the way it was.)

Update: Freedom is … sweaty!

This is for anyone else who Googles “sweaty palms and MacBook Pro” and doesn’t find anything of use. Yes, there are plenty of results, but those are mostly to do with how to protect the keyboard from your sweaty palms – not what causes them in the first place! I don’t have much laptop experience (I usually plug in a keyboard) so maybe it’s not a MacBook thing, maybe it’s all laptops, but I noticed that when I use the MacBook Pro keyboard my palms are more sweaty. After one day of using the MacBook Pro without a keyboard, my palms are constantly sweating. And I know this is not a one-time thing coz the previous time (a 2-week stretch) when I did use the MacBook Pro directly without any keyboard I had the same issue. At that time I Googled and convinced myself the issue was with me rather than anything else … and just moved on (got myself an external keyboard basically, later on). Today I realize it is probably coz of the keyboard.

Like I said, this may not be MacBook Pro related as such. But the lack of travel of the butterfly keyboard does stress my palms and that’s probably a large contributing factor. I hate this lack of travel, and the layout takes a bit of getting used to … my fingers are definitely a lot more cranky since I started with this keyboard.

Update 2: I tried, but decided to give up. I sit long hours with the MacBook Pro, so it is not good posture for me to sit bent over it either. I should ideally be staring at a monitor or at the MacBook Pro raised up – like I was doing so far – so why am I trying to disable myself by taxing both my neck and hands with poor posture? If I raise the MacBook Pro in the centre of my desk and use the keyboard again with it, neither am I hurting my fingers nor my neck. So this post eventually turned out to be about nothing, as I am back to where I started. :D

I need to deploy a language pack for one of our offices via ConfigMgr. I have no idea how to do this!

What they want is for the language to appear in this section of Office:

I don’t know much of Office so I didn’t even know where to start. I found this official doc on deploying languages and that talked about modifying the config file in ProPlus.WW\Config.xml. I spent a lot of time trying to understand how to proceed with that and even downloaded the huge ISOs from VLSC, but had no idea how to deploy them via ConfigMgr. That is, until I spoke to a colleague with more experience in this and realized that what I am really after is the Office 2016 Proofing Toolkit. You see, language packs are for the UI – the menus and all that – whereas if you are only interested in spell check and all that stuff, what you need is the proofing tools. (In retrospect, the screenshot above says so – “dictionaries, grammar checking, and sorting” – but I didn’t notice that initially.)

So first step, download the last ISO in the list below (Proofing Tools; 64-bit if that’s your case).

Extract it somewhere. It will have a bunch of files like this:

The proofkit.ww folder is your friend. Within that you will find folders for various languages. You can see this doc for a list of language identifiers and languages. In the root of that folder is a config.xml file with the following –

By default this file does nothing. Everything’s commented out as you can see. If you want to add additional languages, you modify the config.xml first and then pass it to setup.exe via a command like setup /config \path\to\this\config.xml. The setup command is the setup.exe in the folder itself.
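As an illustration, an edited config.xml might look like this (the Product name and AddLanguage element are per Microsoft’s Office 2016 Proofing Tools deployment doc; the language IDs and Display attributes here are just examples):

```xml
<Configuration Product="Proofkit">
  <!-- One AddLanguage element per proofing language to install -->
  <AddLanguage Id="es-es" />
  <AddLanguage Id="pt-br" />
  <!-- Silent install settings, handy for ConfigMgr deployments -->
  <Display Level="none" CompletionNotice="no" SuppressModal="yes" AcceptEula="yes" />
</Configuration>
```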

Step 2 would be to copy the setup.exe, setup.dll, proofkit.ww, and proofmui.en-us to your ConfigMgr content store, into a folder of its own. It’s important to copy proofmui.en-us too. I had missed that initially and was getting “The language of this installation package is not supported by your system” errors when deploying. After that you’d make a new application which will run a command like setup.exe /config \path\to\this\config.xml. I am not going into the details of that. These two blog posts are excellent references: this & this.

At this point I was confused again, though. Everything I read about the proofing kit made it sound like a one time deal – as in, you install all the languages you want, and you are done. What I couldn’t understand was how I would go about adding/ removing languages incrementally. What I mean is, say I modified this file to add Spanish and Portuguese as languages, and I deploy the application again … since all machines already have the proofing kit package installed, and its product code is already present in the detection methods, wouldn’t the deployment silently be ignored?

To see why this doesn’t make sense to me, here are the typical instructions (based on above blog posts):

Copy to content store

Modify config.xml with the languages you are interested in

Create a new ConfigMgr application. While creating you go for the MSI method and point it to the proofkit.ww\proofkitww.msi file. This will fill the MSI detection code etc. in ConfigMgr.

After that edit the application you created, modify the content location to remove the proofkit.ww part (because we are now going to run setup.exe from the folder above it), and modify the installation program in the Programs tab to be setup.exe /config proofkit.ww\config.xml.

Notice how the uninstall program and detection method both have the MSI code of the MSI we targeted initially. So what do I do if I modify the config.xml file later and want to re-deploy the application? Since it will detect the MSI code of the previous deployment it won’t run at all; all I can do is uninstall the previous installation first and then re-install – but that’s going to interrupt users, right?

Speaking to my colleagues it seems the general approach is to include all languages you want upfront itself, then add some custom detection methods so you don’t depend on the MSI code above, and push out new languages if needed by creating new applications. I couldn’t find mention of something like this when I Googled (probably coz I wasn’t asking the right questions), so here goes what I did based on what I understood from others.

As before, create the application so we are at the screenshot stage above. As it stands the application will install, and will detect that it has installed correctly if it finds the MSI product code. What I need to do is add something extra to this so I can re-deploy the application and it will notice that in spite of the MSI being installed it needs to re-install. First I played around with adding a batch file as a second deployment type after the MSI deployment type, having it add a registry key. Something like this:

@echo off
SET KEY=OfficeProofingKit2016
SET VER=1
reg add HKLM\Software\MyFirm /v %KEY% /t REG_SZ /d %VER%

This adds a key called OfficeProofingKit2016 with value 1. Whenever I change my languages I can update the version to kick off a new install. I added this as a detection method to the batch file deployment type, and made the MSI deployment type a dependency of it. The idea being that when I change languages and update the batch file and detection method with a new version, it will trigger a re-run of the batch file which will in turn cause the MSI deployment type to be re-run.

That turned out to be a dead end coz 1) I am not entirely clear how multiple deployment types work and 2) I don’t think whatever logic I had in my head was correct anyways. When the MSI deployment type re-runs wouldn’t it see the product is already installed and just silently continue?! I dunno.

Fast forward. I took a bath, cleared my head, and started looking for ways in which I could just do both installation and tattooing in the same batch file. I didn’t want to go with batch files as they are outdated (plus there’s the thing with UNC paths etc). I didn’t want to do VBScript as that’s even more outdated :p and what I really should be doing is some PowerShell scripting to be really cool and do this like a pro. Which led me to the PowerShell App Deployment Toolkit (PSADT). Oh. My. God. Wow! What a thing.

The website’s a bit sparse on documentation but that’s coz you got to download the toolkit and look at the Word doc and examples in there. Plus a bit of Googling to get you started with what others are doing. But boy, is PSADT something! Once you download the PSADT zip file and extract its contents there’s a toolkit folder with the following:

This folder is what you would copy over to the content store of whatever application you want to install. And into the “Files” folder of this is where you’d copy all the application deployment stuff – the things you’d previously have copied into the content store. You can install/ uninstall by invoking the Deploy-Application.ps1 file or you can simply run the Deploy-Application.exe file.

Notice I changed the deployment type to a script instead of MSI, as it previously was. The only program I have in that is the Deploy-Application.exe.

And I changed the detection method to be the registry key I am interested in with the value I want.

That’s all. Now for the fun stuff, which is in the Deploy-Application.ps1 file.

At first glance that file looks complicated. That’s because there’s a lot of stuff in it, including comments and variables etc., but what we really need to concern ourselves with is certain sections. That’s where you set some variables, plus do things like install applications (via MSI or directly running an exe like I am doing here), do some post install stuff (which is what I wanted to do, the point of this whole exercise!), uninstall stuff etc. In fact, this is all I had to add to the file for my stuff:

[string]$appVendor = 'Microsoft'
[string]$appName = 'Office Proofing Kit 2016'
[string]$appVersion = '2016'
[string]$appArch = ''
[string]$appLang = 'EN'
[string]$appRevision = '01'
[string]$appScriptVersion = '1.0.1'
[string]$appScriptDate = '02/06/2019' # mm/dd/yyyy
[string]$appScriptAuthor = 'Rakhesh Sasidharan'

[string]$appRegKey = 'HKLM\SOFTWARE\MyFirm\Software'
[string]$appRegKeyName = 'OfficeProofingKit2016'
[string]$appRegKeyValue = '2' # !!when you change this version be sure to update the detection method!!

If (-not $useDefaultMsi) { Show-InstallationPrompt -Message 'New languages were successfully added to your Office 2016 installation. Please close and open Word, Outlook, etc. for the new languages to be enabled.' -ButtonRightText 'OK' -Icon Information -NoWait }

That’s it! :) That takes care of running setup.exe with the config.xml file as an argument. Tattooing the registry. Informing users. And even undoing these changes when I want to uninstall.
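For completeness, the install/uninstall plumbing itself amounted to a couple of PSADT calls. A sketch (Execute-Process, Set-RegistryKey, and Remove-RegistryKey are PSADT functions; $dirFiles is a PSADT variable pointing to the “Files” folder; the paths mirror my layout above):

```powershell
# Installation section: run Office setup with our config.xml
Execute-Process -Path "$dirFiles\setup.exe" -Parameters "/config `"$dirFiles\proofkit.ww\config.xml`""
# Tattoo the registry so the ConfigMgr detection method can spot this version
Set-RegistryKey -Key $appRegKey -Name $appRegKeyName -Value $appRegKeyValue -Type String

# Uninstallation section: remove the tattoo
Remove-RegistryKey -Key $appRegKey -Name $appRegKeyName
```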

I found the Word document that came with PSADT and this cheatsheet very handy to get me started.

Update: Forgot to mention. All the above steps only install the languages on user machines. To actually enable it you have to use GPOs. Additionally, if you want to change keyboard layouts post-install that’s done via registry key. You can add it to PSADT deployment itself. The registry key is HKEY_CURRENT_USER\Keyboard Layout\Preload. Here’s a list of values.
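For illustration, a .reg fragment for that key might look like this (00000409 is the en-US layout ID; the second entry, Spanish, is just an example – pick yours from the list):

```
Windows Registry Editor Version 5.00

; Layouts load in numeric order; values are keyboard layout IDs
[HKEY_CURRENT_USER\Keyboard Layout\Preload]
"1"="00000409"
"2"="0000040A"
```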

Quick shoutout to this blog post by Tony (cached copy, as I can’t access the original). The cmdlet has a -ContentFilter switch where you can specify the date and other parameters by which you can filter what is exported. Very irritatingly, the date parameter expects to be in US format. And even if you specify it in US format to begin with, it converts it to US format again, switching the day and month and complaining if the result is an invalid date. Thanks to this blog post which explained this to me, and this blog post for a super cool trick to work around it. Thank you all!

No original contribution from my side here. It’s just something I put together from stuff found elsewhere.

I wanted to run this as a cron job on ESXi periodically but apparently that’s not straightforward. (Wanted to run it as a cron job because I am not sure the queue settings can be set as a default for new datastores.) ESXi doesn’t keep cron jobs over reboots, so you have to modify some other script to inject a new crontab each time the host reboots. I was too lazy to do that.

Another plan was to try and run this via PowerCLI as I had to do this in a whole bunch of hosts. I was too lazy to do that either and PowerCLI seems a kludgy way to run esxcli commands. Finally I resorted to plink (SSH was already enabled on all the hosts) to run this en-masse:

This feels like cheating, I know. It requires SSH to be enabled on all hosts. It assumes I have put the script in some common datastore accessible across all hosts. I am using PowerShell purely to loop and read the contents of a text file consisting of “hostname: password” entries. And I am using plink to connect to each host and run the script. (I love plink for this kind of stuff. It’s super cool!) It feels like a hotch-potch of so many different things and not very elegant, but it’s lazy. (Something like this would be elegant. Using PowerCLI properly; not just as a wrapper to run esxcli commands. But I couldn’t figure out the equivalent commands for my case. I was using FC rather than SCSI.)
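The loop itself was nothing fancy; roughly this (the file name, datastore path, and script name here are stand-ins for my setup, so adjust accordingly):

```powershell
# hosts.txt contains "hostname: password" lines
Get-Content .\hosts.txt | ForEach-Object {
    $esxHost, $password = $_ -split ': ', 2
    # -batch stops plink from prompting; the script sits on a datastore all hosts can see
    & plink.exe -batch -ssh "root@$esxHost" -pw $password 'sh /vmfs/volumes/shared-datastore/set-queue.sh'
}
```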

I don’t know if I will ever use this at work, but I was reading up on Let’s Encrypt and ACME Certificate Authorities and decided to play with them for my home lab. A bit of work went on behind the scenes for the above stuff, so here are some notes.

First off, ACME Certificates are all about automation. You get certificates that are valid for only 90 days, and the idea is that every 60 days you renew them automatically. So the typical approach of a CA verifying your identity via email doesn’t work. All verification is via automated methods like a DNS record or HTTP query, and you need some tool to do all this for you. There’s no website where you go and submit a CSR or get a certificate bundle to download. Yeah, shocker! In the ACME world everything is via clients, a list of which you can find here. Most of these are for Linux or *nix, but there are a few Windows ones too (and if you use a web server like Caddy you even get HTTPS out of the box with Let’s Encrypt).

To dip my feet in, I started with Certify The Web, a GUI client for all this. It was fine, but didn’t hook me much, and so I moved to Posh-ACME, a purely PowerShell CLI tool. I’ve liked it so far.

Apart from installing the tool via something like Install-Module -Name Posh-ACME there’s some background work that needs to be done. A good starting point is the official Quick Start followed by the Tutorial. What I go into below is just a rehash for myself of what’s already in the official docs. :)

Requesting a certificate via Posh-ACME is straightforward. Essentially, if I want a certificate for a domain ’sso.raxnet.global’ I would do something like this:
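Roughly this (New-PACertificate is the Posh-ACME cmdlet; domain and contact as in my lab; with no DNS plugin specified it defaults to manual TXT record creation):

```powershell
New-PACertificate 'sso.raxnet.global' -AcceptTOS -Contact 'admin@rakhesh.com'
```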

At this point I’d have to go and make the specified TXT record. The command will wait 2 mins and then check for the existence of the TXT record. Once it finds it, the certificates are generated and all is good. (If it doesn’t find the record it keeps waiting; or I can Ctrl+C to cancel.) My ACME account will be admin@rakhesh.com and the certificate tied to that.

If I want to be super lazy, this is all I need to do! :) Run this command every 60 days or so (worst case every 89 days), update the _acme-challenge.domain TXT record as requested (the random value changes each time), and bam! I am done.

If I want to automate it however I need to do some more stuff. Specifically, 1) I need to be on a DNS provider that gives me an API to update its records, and 2) hopefully said DNS provider is on the Posh-ACME supported list. If so, all is good. I use Azure DNS for my domain, and instructions for using Azure DNS are already in their documentation. If I were on a DNS provider that didn’t have APIs, or for whatever reason if I wanted to use a different DNS provider from my main domain, I can even make use of CNAMEs. I like this CNAME idea, so even though I could have used my primary zone hosted in Azure DNS I decided to make another zone in Azure DNS and go down the CNAME route.

So, here’s how the CNAME thing works. Notice above that Posh-ACME asked me to create a record called _acme-challenge.sso.raxnet.global? Basically, for every domain you are requesting a certificate for (including any names in the Subject Alternative Name (SAN) list), you need to create a _acme-challenge.<domain> TXT record with the random challenge given by ACME. However, what you can also do is have a separate domain, for example ‘acme.myotherdomain’, and pre-create CNAME records like _acme-challenge.<whatever>.mymaindomain -> <whatever>.myotherdomain, such that when the validation process looks for _acme-challenge.<whatever>.mymaindomain it will follow it through to <whatever>.myotherdomain and update & verify the record there. So my main domain never gets touched by any automatic process; only this other domain that I set up (which can even be a sub-domain of my main domain) is where all the automatic action happens.

In my case I created a CNAME from _acme-challenge.sso.raxnet.global to sso.acme.raxnet.global (where acme.raxnet.global is my Azure DNS hosted zone). I have to create the CNAME record beforehand, but I don’t need to make any TXT records in the acme.raxnet.global zone – that happens automatically.
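In zone-file terms the setup looks something like this (illustrative syntax; the TXT value is the random challenge that ACME supplies, and only the delegated zone is ever written to automatically):

```
_acme-challenge.sso.raxnet.global.  IN  CNAME  sso.acme.raxnet.global.
; the validation TXT record then lands only in the delegated zone:
sso.acme.raxnet.global.             IN  TXT    "<random challenge from ACME>"
```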

To automate things I then made a service account (aka “App registrations” in Azure-speak) whose credentials I could pass on to Posh-ACME, and whose rights were restricted. The Posh-ACME documentation has steps on creating a custom role in Azure to just update TXT records; I was a bit lazy here and simply made a new App Registration via the Azure portal and delegated it “DNS Contributor” rights to the zone.

Not shown in the screenshot, after creating the App Registration I went to its settings and also assigned a password.

That done, the next step is to collect various details such as the subscription ID, tenant ID, and the App Registration name and password into a variable. Something like this:

$azParams = @{
    AZSubscriptionId = 'REPLACE ME';
    AZTenantId = 'REPLACE ME';
    AZAppCred = (Get-Credential)
}

This is a one time thing as the credentials and details you enter here are then stored in the local profile. This means renewals and any new certificate requests don’t require the credentials etc. to be passed along as long as they use the same DNS provider plugin.

That’s it really. This is the big thing you really have to do to make the DNS part automated. Assuming I have already filled in the $azParams hash-table as above (by copy-pasting it into a PowerShell window after filling in the details and then entering the App Registration name and password when prompted) I can request a new certificate thus:
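The request then looks something like this (the plugin switch is -Plugin in current Posh-ACME versions; older releases called it -DnsPlugin; domain and alias as in my lab):

```powershell
New-PACertificate 'sso.raxnet.global' -AcceptTOS -Contact 'admin@rakhesh.com' `
    -Plugin Azure -PluginArgs $azParams -DnsAlias 'sso.acme.raxnet.global'
```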

The -PluginArgs switch passes along the arguments this plugin expects; in this case the credentials etc. that I filled into the $azParams hash-table.

The -DnsAlias switch specifies the CNAME records to update; you specify one for each domain. For example, in this case ‘sso.raxnet.global’ will be aliased to ‘sso.acme.raxnet.global’ so the latter is what the DNS plugin will go and update. If I specified two domains e.g. ‘sso.raxnet.global’,’sso2.raxnet.global’ (an array of domains) then I would have had to specify two aliases ‘sso.acme.raxnet.global’,’sso2.acme.raxnet.global’ OR I could just specify one alias ‘sso.acme.raxnet.global’ provided I have created CNAMES from both domains to this same entry, and the plugin will use this alias for both domains. My first example at the beginning of this post does exactly that.

That’s it! To renew my certs I have to use the Submit-Renewal cmdlet. I don’t even need to run it manually. All I need do is create a scheduled task to run the cmdlet Submit-Renewal -AllAccounts to renew all my certificates tied to the current profile (so if I have certificates under two different accounts – e.g. admin@rakhesh.com and admin2@rakhesh.com but both are in the same Windows account where I am running this cmdlet from, both accounts would have their certs renewed).
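The scheduled task can be registered with something like this (the task name and schedule are my own choices; the cmdlets are the standard ScheduledTasks module ones):

```powershell
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' `
    -Argument '-NoProfile -Command "Submit-Renewal -AllAccounts"'
$trigger = New-ScheduledTaskTrigger -Daily -At 3am
Register-ScheduledTask -TaskName 'Posh-ACME Renew' -Action $action -Trigger $trigger
```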

What I want to try next is how to get these certs updated with Exchange and ADFS. Need to figure out if I can automatically copy these downloaded certs to my Exchange and ADFS servers.