That’s WAY too much like work. If only I could get the entire list at once…

Well, actually, I can.

The unique job name that I copied out of each error email is also the name of the directory containing that job’s processed files on our processing server. So I can build the query using the names of the directories, instead of copying each problem job name out of an error email.

But how do I know which directories contain jobs that generated an error email?

Actually, it’s better not to know. We’re better off running queries against ALL the jobs – that way, we can catch all the jobs that failed to get ingested into the full-text indexing/search app we use, even if the respective error email goes missing.

So to create a script that runs a query against every job processed in a day:
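The original commands aren’t shown here, but a minimal sketch of the three steps might look like this (the jobs directory path, the query command format, and the output file name are all hypothetical placeholders, not the author’s actual values):

```powershell
# All paths and the "solrquery hits" command format below are assumptions.

# Step 1: get the name of every job directory processed that day
$jobs = Get-ChildItem -Path 'D:\Jobs\2015-09-16' -Directory |
    Select-Object -ExpandProperty Name

# Step 2: build one hits-query line per job
$lines = $jobs | ForEach-Object { "solrquery hits $_" }

# Step 3: write the whole query script at once
$lines | Set-Content -Path 'D:\Scripts\hitsquery.bat'
```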

So instead of taking four steps for each line of the query script, I can create the entire query script using just three steps. PowerShell FTW!

But That’s Not All! You Also Get…

Sure, it’s great that I no longer have to copy lines out of individual emails to create a hits query script for all the jobs processed in a given day, but what about creating the addonly query to ingest the missing jobs into Solr?

Well, as it happened, our Solr services were down on 9/16, so all the jobs failed and needed to be added. Also as it happened, there were 121 jobs processed on 9/16, so the hits query was 121 lines long.

The MEMBERS parameter in the Solr command line for the hits query corresponds to a MEMBERS.TXT file that contains web menu lines used by our web app. Each line of an addonly query uses the same format as the lines in the MEMBERS.TXT file.

So, to create the addonly query, I opened a New (blank) script file tab in the ISE, then entered:
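The commands themselves aren’t reproduced here, but since each addonly line uses the same format as a MEMBERS.TXT line, a sketch might be as simple as filtering MEMBERS.TXT for that day’s jobs (the file path, the output path, and the assumption that the job name embeds the date as “20150916” are all placeholders):

```powershell
# Assumption: MEMBERS.TXT lines begin with the job name, and the job name
# embeds the processing date. Every path here is a hypothetical placeholder.
Get-Content 'D:\Config\MEMBERS.TXT' |
    Where-Object { $_ -match '20150916' } |
    Set-Content 'D:\Scripts\addonlyquery.bat'
```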

But Wait! There’s More!

So I’ve cut down the process of creating these Solr scripts from 4 steps per line to 2 or 3 steps for the whole script – but what about the output?

Previously, I was watching the results of each query and logging each result in our trouble ticket system. I built in a 15-second delay (using Start-Sleep 15) after each line, and bounced back and forth between our trouble ticket system (in a web browser on my PC’s desktop) and the processing server where I had to run the query.

Again, that’s WAY too much like work.

The results of each hits or addonly query are logged to individual text files in a directory on the processing server. This directory is not shared – however, the processing server can send email using our processing mail server.

So:

I connected to the processing server (using Remote Desktop),

ran the hits query (to verify that all the jobs needed to be ingested using the addonly query),

ran the addonly query and watched as each job seemed to be added successfully, then

ran the hits query again (starting at 5:45 PM) to verify successful ingestion.

I then used PowerShell to create and send an email report of the results:
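The actual report code isn’t shown, but a sketch using the real Send-MailMessage cmdlet could look like this (the log directory, the “hits: 0” failure marker in the logs, and every address and server name are assumptions, not the author’s actual values):

```powershell
# Assumptions: query results are logged one file per job under D:\Logs,
# and a failed ingestion shows up as "hits: 0" in its log file.
# All paths, addresses, and server names are hypothetical placeholders.
$logs   = Get-ChildItem 'D:\Logs\*.txt'
$failed = $logs | Where-Object { (Get-Content $_.FullName) -match 'hits: 0' }
$body   = "Jobs checked: $($logs.Count)`nFailed: $(@($failed).Count)"

# Send the summary through the processing mail server
Send-MailMessage -From 'solr@example.com' -To 'techs@example.com' `
    -Subject 'Solr ingestion report' -Body $body -SmtpServer 'mail.example.com'
```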

Shazam! I (and the other tech in this project) got a nice summary report, by the numbers.

What’s next?

Well, these were all commands entered at the PowerShell command line, either in the ISE (on my desktop) or in a regular PowerShell prompt (on the processing server). One obvious improvement would be to create a couple of script-building scripts (how meta!) that I (or someone else) could run to create the query scripts, and a separate script (to be run on the processing server) to generate the summary email.

What if (as is usually the case) only a few of the jobs need to have addonly queries created to be re-ingested? Well, the brute-force way would be to create the addonly query with all the jobs included, then manually edit it, deleting all the lines where the initial ingestion was a success.

But the slick way would be to scan the query results log files, get the job numbers of the jobs that failed to ingest, and pull only the corresponding lines from the MEMBERS.TXT file.

(Spoiler: one way would be to append the name of each failed job to a single string, then get the contents of MEMBERS.TXT, extracting the lines that -match the string of failed jobs, and use those lines to create the addonly query.

It might be faster, though, to hash the lines of MEMBERS.TXT with the job number as the key and the entire line as the value, then return the entire line corresponding to each failed job).
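The hashtable idea in that spoiler could be sketched like this (the MEMBERS.TXT path, the assumption that the job number is the first whitespace-delimited field, and the $failedJobs list are all hypothetical placeholders):

```powershell
# Assumption: each MEMBERS.TXT line starts with the job number, and the
# failed job numbers have already been collected from the result logs.
# All names and paths are placeholders.
$members = @{}
Get-Content 'D:\Config\MEMBERS.TXT' | ForEach-Object {
    $jobNumber = ($_ -split '\s+')[0]   # assume job number is the first field
    $members[$jobNumber] = $_           # the entire line is the value
}

# Pull only the lines for the failed jobs
$failedJobs = @('job123', 'job456')     # hypothetical failed-job list
$failedJobs | ForEach-Object { $members[$_] } |
    Set-Content 'D:\Scripts\addonlyquery.bat'
```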

You can press Enter after a “|” to start the next section of pipe on the next line, you can press Enter after a “,” when entering a list of items, you can press Enter after an opening bracket “{” when starting into a script block, and you can press Enter immediately after entering a “`” (a backtick) anywhere within a line (except inside a string literal).
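For instance, all four continuation styles are legal in a script file (the cmdlets and paths here are just an illustration):

```powershell
# Enter after a pipe "|", after an opening brace "{", and after a comma ","
# all continue the same logical statement onto the next line.
Get-ChildItem -Path C:\Windows |
    Where-Object {
        $_.Name -match 'notepad'
    } |
    Select-Object Name,
                  Length

# Enter immediately after a backtick "`" also continues the line.
Write-Host 'A line broken' `
    'with a backtick'
```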

But if you try to use Enter in the command pane of the ISE, you’ll get an error. For example, when I try to break a line after the opening bracket of Where-Object (as in the example above), I get: