Hi Anil, I have a doubt here, since SHA1 is slowly losing proponents due to its security flaws (and our collectors are increasingly equipped with the latest security patches). You can still try it, though. Is there any device in your portal we can use to test?
Please don't post it here; you can open an email ticket to tell me the device and I will test it out.

This might be my last article before leaving my beloved LogicMonitor. It has been a privilege to be part of the Support team but life happens to everyone and I know all things work together for good....
Now, on to Docker monitoring. The standard approach is described and explained here: https://www.logicmonitor.com/support/monitoring/applications-databases/docker/. It takes advantage of cAdvisor, a tool by Google (if I may define it so) for monitoring Docker containers, elaborated in greater detail here: https://github.com/axibase/cadvisor-atsd. cAdvisor is equipped with an API, which LogicMonitor uses to collect data about running Docker containers, and there is a readily available datasource for it.
To make life more exciting as a support team member, about a year ago I received a request from one of our important Clients, whom I will call RS (a.k.a. Robert). His request was the following:
We are using cadvisor to provide docker metrics to Logic Monitor. Is there a way to snag a history of the size of each container
A request like that is definitely beyond Support; it would usually go to our excellent Monitoring Engineering team, whose responsibility is to develop LogicModules. But I embarked on the challenging journey, and within a month a datasource for that purpose was ready.
As an aside, I am in the midst of learning about containers and came across Kubernetes, an orchestration system for containers that boasts a 'self-healing' mechanism. In modern cloud infrastructure, self-healing is not uncommon; the remarkable cloud providers of this generation already offer the same kind of features in their products: auto-scaling, auto-replication, auto-scheduling, and so on. This runs in parallel with (and even deploys the ideas of) the fast-progressing application of Artificial Intelligence to cloud and system infrastructure: a system with the intelligence to heal itself.
Taking advantage of the Kubernetes cluster I have installed, I began researching container metrics with regard to size, and the answer lies within this statement:
Exactly the same goes for cAdvisor for Docker, which has an API to monitor itself; that is the very method the LogicModule "Docker_Containers_cAdvisor20" uses to collect data from Docker containers. Everything we need for monitoring is already available and provided by the source: the cAdvisor API.
Some sample metrics presented by cAdvisor API in a web-based display:
We just need to find out how to process those outputs and feed the numeric data into a datasource.
Fast forward: here is the final product, monitoring the virtual size of Docker containers (orchestrated by Kubernetes) in my test. Please take note of the instance names and wildvalues:
Note: in the second screenshot you can also see datapoints named "virtual_size_sum" and "virtual_size_mbyte".
Now the next question is: how do these numbers come to be?
Firstly, let us take one container highlighted above as an example for the calculation. The instance (container) has the name "k8s_carts_carts-794f6cc876-z9d6p_sock-shop_7655a496-d347-11e7-90ef-000c29412e95_0" and the wildvalue:
8afc5e907b809b9a15884b518fdff1678cad54d2562d6d0b63bd3fb5ce77d5a3
If we trace back, that wildvalue can also be found in the data collected from the cAdvisor API, in the text-based result below, which lists the running Docker processes:
Note: there are two processes for this container, hence the SUM in the virtual size datapoint ("virtual_size_sum").
When we zoom in on only those two processes sharing the one container's wildvalue, the picture becomes clearer:
It makes the total virtual size as follows:
virtual size sum = 1795608576 + 1560576 = 1797169152 B (= 1797169152/1024/1024 = 1713.9140625 MB)
(note: virtual size PID 5686 = 1560576 , virtual size PID 5810 = 1795608576)
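As a sanity check, the arithmetic can be reproduced in a few lines of Python (the PIDs and byte counts are the ones quoted above):

```python
# Virtual-size figures reported by cAdvisor for the two processes
# of the example container (values taken from the text above).
virtual_sizes = {
    5686: 1_560_576,      # bytes
    5810: 1_795_608_576,  # bytes
}

virtual_size_sum = sum(virtual_sizes.values())       # datapoint "virtual_size_sum"
virtual_size_mbyte = virtual_size_sum / 1024 / 1024  # datapoint "virtual_size_mbyte"

print(virtual_size_sum)    # 1797169152
print(virtual_size_mbyte)  # 1713.9140625
```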
The number is precisely what the cAdvisor web-based data shown earlier displays:
Enlarged screenshots:
If you look at the total virtual size number, it is exactly what the LogicModule, collecting from cAdvisor, presents:
The datasource has actually been submitted, but it is not yet released to the community; it is still undergoing review by the official team, so it may take a while (Locator Code: JAGW6D). Until then, there is no way to download it from LM Exchange.
If you wish to have it, you may contact the excellent Support Team, who can search my past cases to get hold of the datasource. Alternatively, you may drop an email to my personal address, purnadi.k@gmail.com, as I will sadly be leaving soon.
Life is too short, so live your lives to the fullest and may we meet again, one way or another. Have a fun day! Cheers.

IPv6, which used to be a behemoth and is now the norm, is still widely unsupported as a standard, although more and more organizations are joining the pool and making their infrastructure IPv6-friendly. IPv4 addresses are scarce, according to the 'authorities' (i.e. APNIC, RIPE), but whether IPv4 will be completely replaced by IPv6 anytime soon remains to be seen (https://mightytechknow.wordpress.com/2016/08/01/ipv6-internet-routes/). As of now, both public route tables are still growing, with IPv4 at ~670,000 routes and IPv6 at ~45,000 (15K growth since Aug 2016).
For LogicMonitor, I can be certain that Internal Web Service Checks and Internal Ping Checks are IPv6-compliant, which also means our collectors are proudly capable of dealing with IPv6. Whether External Service Checks (Web, Ping) are supported is beyond my position to tell. Nonetheless, I am personally confident that IPv6 is already integrated into the SiteMonitors themselves; fully supporting our precious Clients' external Web or Ping checks will depend on whether those SiteMonitors are connected to an IPv6 backbone, which I foresee happening in the near future (I earnestly hope so).
There are questions about IPv6 from the LogicMonitor community as well, which I assume will become more common as time progresses:
Following is a little showcase of Internal Web and Ping Checks using private IPv6 addresses (of course):
'Web Check'
'Ping Check'
and, as you can see, 'all things are possible to him who believes'.
Note:
Here is the IPv6 website:
The ping and web access test from the server (collector):

--- "To him who is able to keep you from stumbling" ----
Monitoring a tunnel so you are notified when it is 'stumbling' is what this topic is all about.
Last year there was a request from one of our financial customers, a private investment bank that uses Palo Alto Networks firewalls, to monitor the VPN tunnels on their PAN device. Although our current LogicModules cover a wide range of monitoring, this specific feature is yet to be developed, so here comes a customized monitoring solution leveraging existing datasources. I will again refrain from mentioning the customer's name, as I have no permission to use it, but initials will suffice ("MB").
I had the privilege of deploying a PAN device (PA-5000 series) long before layer-7 firewalls became popular, when PAN stole a head start in the industry. It is dubbed one of the most sophisticated firewalls on the market even now, with a heartbreaking price (in my opinion), but it delivers on its promise. They even have a virtual appliance, which is quite efficient and a real software-driven firewall, although selling the box was definitely the more money-making business just a few short years ago. Nowadays, hardware businesses built on high-cost brands or trademarks may start losing ground, thanks to software-driven 'everything' (network, storage, data centre, etc.).
Nothing fancy about it, but it was meticulously established based on research into the Palo Alto Networks API. It is not an official datasource in the LogicMonitor repository yet, but it is available as a community-grade datasource on LM Exchange (you can download it with LogicModule Locator: 2KLFET).
Here is a working sample applied on a PA-800 series device with 3 tunnels connected to remote PAN devices:
Following is the Active Discovery script to discover the active tunnels on the device:
import com.santaba.agent.groovyapi.http.*;

apikey = hostProps.get("paloalto.apikey.pass")
host = hostProps.get("system.hostname")

// op command to list all running IPSec tunnels
command = java.net.URLEncoder.encode("<show><running><tunnel><flow><all></all></flow></tunnel></running></show>", "UTF-8")
url = "https://${host}/api/?type=op&key=${apikey}&cmd=${command}"

response_xml = HTTP.body(url)
response_obj = new XmlSlurper().parseText(response_xml)

// Emit one instance per tunnel: wildvalue##name##description
response_obj.result.IPSec.entry.each { entry ->
    name = entry.name.text()
    tunnelid = entry.id.text()
    innerIf = entry."inner-if".text()   // collected but not printed
    localip = entry."localip".text()
    peerip = entry."peerip".text()
    println "${tunnelid}##${name}##local:${localip} - peer:${peerip}"
}
return 0
and afterwards the Collection script to get the data needed:
import com.santaba.agent.groovyapi.http.*;

apikey = hostProps.get("paloalto.apikey.pass")
host = hostProps.get("system.hostname")
wildvalue = '##WILDVALUE##'

// op command scoped to this instance's tunnel-id
command = java.net.URLEncoder.encode("<show><running><tunnel><flow><tunnel-id>" + wildvalue + "</tunnel-id></flow></tunnel></running></show>", "UTF-8")
url = "https://${host}/api/?type=op&key=${apikey}&cmd=${command}"

response_xml = HTTP.body(url)
response_obj = new XmlSlurper().parseText(response_xml)

// Walk the entry tree and print "key: value" pairs for the datapoints to match
response_obj.result.IPSec.entry.each { entry ->
    entry.children().each { node ->
        println node.name() + ": " + node.text()
        node.children().each { list ->
            println list.name() + ": " + list.text()
            list.children().each { instance ->
                println instance.name() + ":" + instance.text()
            }
        }
    }
}
return 0
The original request was only to get the tunnel status (datapoint: tun_status), hence there is only one graph, but I expanded it to cover other metrics available from the API.
One can always add graphs for other datapoints as they deem fit.
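For readers without a PAN device handy, here is a rough Python sketch of the parsing the Active Discovery script performs; the sample XML is hand-made to resemble the `<show><running><tunnel><flow>` response, not a capture from a real firewall, so treat the exact field layout as an assumption:

```python
import xml.etree.ElementTree as ET

# Hand-made sample, assumed to resemble a PAN-OS op-command response.
sample = """
<response status="success">
  <result>
    <IPSec>
      <entry>
        <name>tunnel-to-branch</name><id>1</id>
        <inner-if>tunnel.1</inner-if>
        <localip>192.0.2.1</localip><peerip>198.51.100.7</peerip>
      </entry>
    </IPSec>
  </result>
</response>
"""

lines = []
for entry in ET.fromstring(sample).findall("./result/IPSec/entry"):
    tunnel_id = entry.findtext("id")
    name = entry.findtext("name")
    local = entry.findtext("localip")
    peer = entry.findtext("peerip")
    # Same wildvalue##name##description layout the Groovy script prints
    lines.append(f"{tunnel_id}##{name}##local:{local} - peer:{peer}")

print("\n".join(lines))
```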

I have never believed in the 'virtual' dispute between *nix lovers and UI lovers that has been going on since the NT days (with its PDC/BDC concept), back when virtualization was still a highly guarded, confidential technology of the big boys of the Unix era (LPAR, LDOM, vPar), boasted as capable of hardware virtualization and, of course, carrying highly non-sensible license prices as well. IBM was the champion of all with its RISC and mainframe era, followed by SPARC from the since-sold-off Sun Microsystems, developed heavily by Fujitsu. Sadly for the big boys, that far-fetched technology has nowadays become a commodity, common to a wide range of consumers. Thanks to virtualization technology running on x86, seasoned sysadmins and newbies alike can play with what used to require a 'high-level' certified Unix administrator. Exclusivity has become commonplace.
Back to the dispute: one group says GUI-based system administration is the best tool of all, while the other camp, with its exclusivity and probably a fear-driven feeling of losing its identity, says terminal-based administration has no rival. I beg to differ, and must disagree with both. The best of both worlds is the best.
There are times when text-based system administration is very useful and efficient, but on the other hand, trading the terminal for a few clicks in a UI is exactly what a sysadmin should do to enjoy life a bit more.
With that, I would say that introducing PowerShell some time ago was the right move by Microsoft, and it is about time for them to raise the level of competition.
So much for the past. I actually just want to mention that recently a 'creative' Client of a loyal LogicMonitor customer (whom I should refrain from naming since no permission was given to me, but quoting his initials, 'JJ', will be sufficient, I believe) attempted to use PowerShell to do name server lookups. Having been a *nix guy for some time, I wondered why in the world people would not just use nslookup or dig. I hope my Client is not offended; he should not be, since I was about to give a compliment.
Through the simple and raw development of a datasource, I learnt once again that a new thing is not a bad thing; in fact, it brings about new ways of creative thinking and new opportunities. Although our Support team, by standard operating procedure, is not responsible for developing datasources (such work is even beyond the team's capacity), in this exceptional case Support put together a very raw development. It is not to be taken as an official datasource, nor as a complete and efficient one, but merely as a proof of concept.
Improvements were made based on the original script submitted by our Client, as follows:
$DomainControllers = [System.DirectoryServices.ActiveDirectory.Forest]::GetCurrentForest() |
    Select-Object -ExpandProperty GlobalCatalogs |
    Select-Object -Property @{Name="DC";Expression={$_.Name.split('.')[0].ToUpper()}}, Name, IPAddress, SiteName
$FastenalDomain = "fastenal.com"
$Server = "10.0.0.1"
# Prime the resolver once so the first timed query is not penalized
$Prime = (Measure-Command {Resolve-DnsName -Name $FastenalDomain -Server $Server -ErrorAction SilentlyContinue}).Milliseconds
ForEach ($DomainController in $DomainControllers) {
    try {
        $DNSResponseTime = (Measure-Command {Resolve-DnsName -Name $FastenalDomain -Server $DomainController.IPAddress -ErrorAction Stop}).Milliseconds
    }
    catch {
        $DNSResponseTime = $false
    }
    Write-Host "$($DomainController.DC)=$DNSResponseTime"
}
So, basically, what is to be achieved is measuring the latency of resolving an external or internal domain via several DNS servers (in this case, internal DNS servers in the customer's domain). The script looked quite complex at first, and as we know, a datasource may need two scripts: one for Active Discovery and one for data Collection. After some tests and reworks beyond support hours (imagine the time a non-programmer spends developing such a datasource), the anticlimax is that it is actually very simple, something that would probably take a real scripter five minutes. It is still an achievement nonetheless, so here is the final product:
Active Discovery (this queries the DNS servers that exist in the local domain):
$DomainControllers = [System.DirectoryServices.ActiveDirectory.Forest]::GetCurrentForest() |
    Select-Object -ExpandProperty GlobalCatalogs |
    Select-Object -Property @{Name="DC";Expression={$_.Name.split('.')[0].ToUpper()}}, Name, IPAddress, SiteName
ForEach ($DomainController in $DomainControllers) {
    $nsipaddress = $DomainController.IPAddress
    Write-Host "$($nsipaddress)##$($DomainController.DC)"
}
return 0
Collection:
$responseTime = (Measure-Command {Resolve-DnsName -Name fastenal.com -Server ##WILDVALUE## -ErrorAction Stop}).Milliseconds
Write-Host "$responseTime"
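For comparison only (not a supported datasource), a similar latency measurement can be sketched in Python. The standard library cannot target an arbitrary DNS server the way `Resolve-DnsName -Server` does (that would need a DNS library), so this sketch simply times the system resolver:

```python
import socket
import time

def resolve_ms(hostname):
    """Time one resolution of `hostname` via the system resolver, in milliseconds."""
    start = time.perf_counter()
    try:
        socket.getaddrinfo(hostname, None)
    except socket.gaierror:
        return None  # mirrors the $false the PowerShell script emits on failure
    return (time.perf_counter() - start) * 1000.0

elapsed = resolve_ms("localhost")
print(elapsed)
```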
Yes, of course, this is used in a Windows Server environment (does *nix even have a domain controller concept?)...
Cheers.

Hi, for sure, any 'internal' monitoring, if I may use such a term (i.e. between the collector and the device), is IPv6-friendly as of now. But that only extends as far as the collector's reachability to devices. When it comes to public-facing monitoring, such as External Service Checks, it is yet to be supported (until our SiteMonitors connect to an IPv6 backbone, I believe).
For an Internal Service Check, I did a simple test to see whether a v6 address is accepted, and it truly is:

IPv6 is only supported for INTERNAL Service Checks as of now.
For External Service Checks, I assume support will depend on whether the SiteMonitors are connected to IPv6 providers or tunnels, and neither is available at the moment. Until then, let us keep watching whether IPv6 addresses appear in this list in the near future: https://www.logicmonitor.com/support/services/about-services/what-are-services/#What-locations-are-services-tested-from
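As a side note, if you want to check whether an address you are about to put into a Service Check is an IPv6 literal at all, Python's `ipaddress` module does it in one call (this only validates the literal; it says nothing about reachability):

```python
import ipaddress

def ip_version(addr):
    """Return 4 or 6 for a literal IP address, or None if it is not one."""
    try:
        return ipaddress.ip_address(addr).version
    except ValueError:
        return None

print(ip_version("fd00::10"))  # 6  (a private, ULA-style address)
print(ip_version("10.0.0.1"))  # 4
```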

Info: a Groovy script is the basis for this environment, and the target/end device must be Linux-based, be it a Linux server/desktop distro or an appliance with Linux-kernel firmware (err... that is basically almost every device then: firewalls, switches, IDS/IPS, load balancers, Apple devices, Android devices, etc. It seems every gadget on the planet runs a Linux kernel).
This was a request from a LogicMonitor Customer some years ago (it seems so long, though it is actually just over two years):
Obviously, this is not an official datasource crafted by the amazing Monitoring Engineering team of LogicMonitor but, patting my own back, it suffices to say that it serves its purpose: better security when connecting remotely, which has long been common best practice for anyone who enjoys text-based command-line remote sessions.
An SSH key is used instead of sending a password for the apparent reason of information security, although one might argue that the SSH password would travel only between a LogicMonitor collector and the endpoint, within a supposedly internal network. Yet security best practice may dictate key-based authentication regardless of the network.
Before progressing further, however, there is a catch in using SSH keys: the necessity of deploying the public key to each target device. Simply put, whenever SSH keys are used for a remote session between two devices, a private key and a public key handle the authentication, hence no password is needed. The private key goes on the source device, where the remote session originates, and the public key on the endpoint. The private key is never shared, whereas the public key goes onto every device the source needs to connect to. The only hassle, if it is even considered one, is loading that public key onto each target device (if there are many). From a security standpoint, that is not an issue at all; it is rather compulsory. (As a comparison, using an SSH user and password, the process would be similar: create the user and password on each target device.) This practice is really not ancient stuff; almost every cloud provider, AWS being one, recommends it (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html).
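The deployment step itself is mundane; here is a minimal Python sketch of what ssh-copy-id effectively does, assuming OpenSSH conventions (`~/.ssh/authorized_keys`, 0600 permissions). The key line is a placeholder, not a real key:

```python
import os
import stat
import tempfile

def install_public_key(home_dir, pubkey_line):
    """Append a public key to authorized_keys with OpenSSH-friendly permissions."""
    ssh_dir = os.path.join(home_dir, ".ssh")
    os.makedirs(ssh_dir, mode=0o700, exist_ok=True)
    auth_file = os.path.join(ssh_dir, "authorized_keys")
    with open(auth_file, "a") as f:
        f.write(pubkey_line.rstrip("\n") + "\n")
    # sshd is picky about permissions: 0600 for the file
    os.chmod(auth_file, stat.S_IRUSR | stat.S_IWUSR)
    return auth_file

# Usage with a throwaway directory and a placeholder key line:
target = tempfile.mkdtemp()
path = install_public_key(target, "ssh-rsa AAAA...placeholder... collector@lab")
print(path)
```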
In my LogicMonitor test, the setup is between a collector server and a monitored device (both happen to be Ubuntu 16, although Windows can be used on the collector side as well; for the obvious reason, this method cannot be used against a Windows target device). My simple test just monitors the size of a log file, the syslog. One thing worth noting: remote-session monitoring will certainly consume more collector resources; in fact, every scripted datasource will. With this method the processing time also seems to increase slightly compared with user/password, but I have not done any thorough observation (this is just a test, and I have no environment large enough for a proper load test). Security and processing speed certainly do not go hand in hand, especially considering the recent havoc in which a processor company caused a worldwide information-security nightmare by bypassing security measures for the sake of data-processing speed.
So here is the script. It basically runs a command to output the data from a file named 'dusyslog' on the remote device, and a datapoint captures it (datapoint name: size):
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import com.jcraft.jsch.Channel;
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;

try {
    String command = "cat dusyslog";
    String host = hostProps.get("system.hostname");
    String user = hostProps.get("ssh.user");
    String key = hostProps.get("ssh.key");

    JSch jsch = new JSch();
    // Authenticate with the private key file instead of a password
    jsch.addIdentity(key);
    JSch.setConfig("StrictHostKeyChecking", "no");

    Session session = jsch.getSession(user, host, 22);
    session.connect();

    Channel channel = session.openChannel("exec");
    channel.setCommand(command);
    channel.setErrStream(System.err);
    InputStream input = channel.getInputStream();
    // Note: the original listing called connect() twice; once is enough,
    // after the input stream has been obtained.
    channel.connect();

    try {
        BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(input));
        String line;
        while ((line = bufferedReader.readLine()) != null) {
            println(line);
        }
        bufferedReader.close();
    } catch (IOException err) {
        err.printStackTrace();
    }

    channel.disconnect();
    session.disconnect();
} catch (Exception err) {
    err.printStackTrace();
}
The first thing you may notice is that I am using:
jsch.addIdentity(key)
for adding the key file into the session for authentication.
So what is this file? It is the private key file, residing in a secure place on the collector server. You need to make sure the file 'dwelleth in the secret place of the most High and shall abide under the shadow of the Almighty'. I really do mean that the private key should not be exposed to the world, though of course I make it sound like a very grave security matter; just making sure the file has permissions limited to the collector service is sufficient. Undoubtedly the script was not built by me from scratch, but I have made some modifications, so it is safe to be 'copyrighted' by me, and you have 'the right to copy' and enhance it if you need to. A treat of coffee would be highly appreciated.
Further to that, this part:
key = hostProps.get("ssh.key");
as per normal, this is defined as a device property; following are the samples from my test:
Linux device:
/security/key
Windows device:
C:\\security\\key
Note: you can add additional security by disguising the location of the file too; that folder "security" is not the original folder where the private key resides. This is for paranoid security practitioners. (But as I usually joke with friends in the IT field, the best data security was during the WordStar era, before Windows 3.11 came along and before TCP/IP was introduced to home users.)
Below are some screenshots from the implementation:

Hi Martin, I happened to come across your inquiry. Just to let you know that you can use an SSH key to log into your device from the collector. I have tried it and it worked well. You do, of course, need to copy the public key onto each target device. The usual drill, as you might know.

Being a newbie myself in the area of PowerShell, this is just a simple example, even 'raw' PS code, for the LogicMonitor REST API, which will hopefully be of some benefit to whoever needs it.
I believe one of the many advantages of using LogicMonitor is a freedom that other products on the market may not give: deploying your own creativity on top of the existing monitoring. The example provided here pertains to the REST API, for which LogicMonitor has a bunch of publications on our help pages:
Several code samples are included in those documents. I have seen many of our great Clients use them and even take them to the next level, where even our Support team (or probably just myself) was 'brought to its knees' and had to admit the very advanced level these precious Clients have reached. What excellent people they are!
Recently one nice Client asked for sample code to retrieve the devices in the portal, and not just a few but ALL of them. We do have a sample, kind courtesy of our Product Team (Ms. Sarah Terry):
If the request had been for Python, that sample would have been sufficient; 'unfortunately', it required PowerShell. As we all know, our Support Team's standard response in such cases is to suggest that our Clients develop in-house, or wait for Product/Development to release a fresh sample on the abovementioned help page.
But on this occasion, using the other samples as a 'cheat sheet', I tested the following code, which gives a satisfactory result (for a newbie, certainly):
<# account info #>
$accessId = '.....[API access id].....'
$accessKey = '....[API access key].....'
$company = '.....[LogicMonitor portal/account]....'
$allDevices = @()

<# loop until every device has been fetched #>
$count = 0
$done = 0
While ($done -eq 0) {
    <# request details #>
    $httpVerb = 'GET'
    $resourcePath = '/device/devices'
    $queryParams = '?offset=' + $count + '&size=3' + '&fields=id,displayName'

    <# construct URL #>
    $url = 'https://' + $company + '.logicmonitor.com/santaba/rest' + $resourcePath + $queryParams

    <# get current time in milliseconds #>
    $epoch = [Math]::Round((New-TimeSpan -start (Get-Date -Date "1/1/1970") -end (Get-Date).ToUniversalTime()).TotalMilliseconds)

    <# concatenate request details #>
    $requestVars = $httpVerb + $epoch + $resourcePath

    <# construct signature #>
    $hmac = New-Object System.Security.Cryptography.HMACSHA256
    $hmac.Key = [Text.Encoding]::UTF8.GetBytes($accessKey)
    $signatureBytes = $hmac.ComputeHash([Text.Encoding]::UTF8.GetBytes($requestVars))
    $signatureHex = [System.BitConverter]::ToString($signatureBytes) -replace '-'
    $signature = [System.Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($signatureHex.ToLower()))

    <# construct headers #>
    $auth = 'LMv1 ' + $accessId + ':' + $signature + ':' + $epoch
    $headers = New-Object "System.Collections.Generic.Dictionary[[String],[String]]"
    $headers.Add("Authorization", $auth)
    $headers.Add("Content-Type", 'application/json')

    <# make request and accumulate this page of devices #>
    $response = Invoke-RestMethod -Uri $url -Method Get -Header $headers
    $total = $response.data.total
    $devId = $response.data.items.id
    $numDevices = $devId.Length
    $count += $numDevices
    $items = $response.data.items
    $allDevices += $items
    if ($count -eq $total) {
        $done = 1
    }
} #end loop
Write-Host ($allDevices | ConvertTo-Json -Depth 5)
So, hopefully, somebody can sanitize the code further for their own usage.
The important highlights are the account info values, which will need to be changed accordingly, and the query parameters, which control which fields are displayed in the output and how many devices are retrieved at a time (size=3).
Note: using dummy devices in our lab portal, 11 devices were successfully retrieved by iterating 3 at a time across 4 API requests. I never tested above 1,000 devices, which the documentation mentions as the limit per request (although it would be nice if any of our Clients were willing to test; if only Google were our Client, it would be great to test their two freshly built data centers in the far west of Singapore, with surely more than thousands of devices):
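For readers who prefer Python, the LMv1 signature construction used above (HMAC-SHA256 over verb + epoch + data + resource path, hex-encoded in lowercase, then base64) can be sketched as a small helper; the credential values here are placeholders:

```python
import base64
import hashlib
import hmac
import time

def lmv1_header(access_id, access_key, http_verb, resource_path, data="", epoch=None):
    """Build the LMv1 Authorization header for a LogicMonitor REST API request."""
    if epoch is None:
        epoch = str(int(time.time() * 1000))  # current time in milliseconds
    request_vars = f"{http_verb}{epoch}{data}{resource_path}"
    hex_digest = hmac.new(access_key.encode(), request_vars.encode(),
                          hashlib.sha256).hexdigest()           # lowercase hex
    signature = base64.b64encode(hex_digest.encode()).decode()  # then base64
    return f"LMv1 {access_id}:{signature}:{epoch}"

# Placeholder credentials, just to show the shape of the header:
print(lmv1_header("ID", "KEY", "GET", "/device/devices"))
```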

Note: as of the publication of this article, collector 25.000 is GD (optional general release), which means this article will become obsolete as versions move forward.
In the past 2-3 months I had two cases where an error occurred when an Internal Service Check of a website was authenticated with NTLM via ADFS. The error seemed odd, with a message of:
or in the detailed response, it can be seen as:
<title>401 - Unauthorized: Access is denied due to invalid credentials.</title>
regardless of whether the credentials (username, password) set in the Service Check configuration are correct.
By the design of previous collector versions (before 24.300), per the Product & Development team, the error is "normal": the URL of the request origin differs from the authentication URL (in this case, the ADFS URL), and the collector does not pass the credentials on to the authentication server, which makes the process fail.
Fortunately, with the arrival of version 25.000 this has all changed, and redirected authentication is now supported, as explained in this document:
(see "General Deployment Collector - 25.0")
It is evident in my little test, which you can also see in the screen video below:
website to check: http://admin.lmglobalsupport.com (redirected to http://pk.lmsupportteam.com)
ADFS authentication: https://fspk.lmsupporteam.info
The following are additional screenshots of the location in IIS (which I used for my test) where HTTP redirection is configured:
Here is just a preview about website authentication in a browser:

As we all know, LogicMonitor has monitoring to check whether the web server on a device is alive and responding. It uses a datasource called HTTP(S), with a nice short note here:
https://www.logicmonitor.com/support/datasources/data-collection-methods/webpage-httphttps-data-collection/
However, there is a catch :) and the explanation is on the way...
I am running NGINX (with SSL). This datasource is supposed to monitor the secured web server correctly, but it does not: there is an error code 5 which, based on the note in the datasource, means a network error. What kind of error is that? It demands a closer look.
Firstly, such an error usually leads to checks on whether the web server is running fine, and the LogicMonitor Collector debug window has a tool for just that:
!http
In the debug window, it shows that the web server has no issue.
Being paranoid (or, in fact, following standard procedure for any devops/sysadmin?), checking the web server service is rightfully done as well, and it is totally up and running.
Furthermore, querying the web server from the collector server's shell is more evidence that all is well with the web server.
Checking from collector debug?
checking the web server?
query from collector shell to the web server?
This was very confusing at first, until a more detailed investigation was done.
The HTTP protocol version is at the centre of this blunder.
The datasource sends its request with HTTP version 1.0, which the server does not accept, so it throws an error every time. A web server can be configured to accept both the older 1.0 protocol (which is still widely used) and 1.1; nevertheless, some accept only 1.1.
The collector debug command !http in LogicMonitor does not specifically request 1.0, so no error occurs when that debug command is run. The result shows this as evidence:
It also leads to the finding that the web server returns the reply with protocol version 1.1.
If the query specifies which protocol to use, as the HTTPS datasource does, the web server coughs up the error:
Server2000:~$ curl -k -I --http1.0 https://test-server_nginx
curl: (52) Empty reply from server
and when the request is specified with 1.1, the result is just what it should be:
Server2000:~$ curl -k -I --http1.1 https://test_server_nginx
HTTP/1.1 200 OK
Server: nginx/1.10.0 (Ubuntu)
Date: Sun, 13 Aug 2017 09:59:35 GMT
Content-Type: text/html
Content-Length: 6031
Last-Modified: Wed, 10 May 2017 11:51:16 GMT
Connection: keep-alive
ETag: "5912feb4-178f"
Strict-Transport-Security: max-age=63072000; includeSubdomains
X-Frame-Options: DENY
X-Content-Type-Options: nosniff
Accept-Ranges: bytes
Therefore, the datasource needs to be modified to deal with web servers that reject version 1.0 queries, with the following in the request part:
GET / HTTP/1.1 \nUser-Agent:LogicMonitor\nConnection: close\n\n
hence the monitoring will be corrected:
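The difference on the wire is really just the request line, plus the Host header that HTTP/1.1 makes mandatory. A tiny sketch (the hostname is a placeholder):

```python
def raw_request(host, version="1.1"):
    """Build a minimal raw HTTP GET request for the given protocol version."""
    lines = [f"GET / HTTP/{version}", "User-Agent: LogicMonitor"]
    if version == "1.1":
        lines.append(f"Host: {host}")  # required by HTTP/1.1 (RFC 7230)
    lines.append("Connection: close")
    return "\r\n".join(lines) + "\r\n\r\n"

print(raw_request("test-server_nginx", "1.0"))
print(raw_request("test-server_nginx", "1.1"))
```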
As a note, there is a configuration in NGINX that restricts the web server to respond only to the 1.1 protocol and reject 1.0, which is the following:
if ($server_protocol ~* "HTTP/1.0") {
return 444;
}
"444" is the code that will be returned when the query uses 1.0, which means:
444 No Response
Used to indicate that the server has returned no information to the client and closed the connection.
The error in LogicMonitor corresponds to that error configuration in the web server; in this case, when protocol 1.0 is used, it returns code 5.
For IIS, the restriction goes in web.config as follows, using the URL Rewrite module:
<system.webServer>
  <rewrite>
    <rules>
      <rule name="rulename1" patternSyntax="Wildcard" stopProcessing="true">
        <match url="*" />
        <conditions>
          <add input="{SERVER_PROTOCOL}" pattern="HTTP/1.0" />
        </conditions>
        <action type="AbortRequest" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>

Monitoring Tomcat Context with HTTP Data Collection
Tomcat is not a common home pet that people normally know. It is not a type of cat ("a male cat") after all, but an eccentric name like many other open source software names, i.e. Guacamole, Apache, RabbitMQ, etc., which I believe represents the free-spirited nature of such software, where creativity is the main ingredient.
I am not here to explain Tomcat per se, both because Google provides abundant information about it and because it is out of the scope of this article. Simply put, Tomcat is a servlet container in which multiple Contexts can exist; each Context refers to a web application.
Recently quite a few customers have requested monitoring of the status and response time of Tomcat Contexts, which is actually simple since there is already a readily available HTTP datasource, as well as the Internal Web Service Check, for that purpose. However, these customers have multiple Contexts, as in a normal Tomcat application.
Therefore Active Discovery (AD) is needed to get the list of Contexts running on Tomcat before the simple HTTP data collection, which has only two datapoints (Response Time & Status), is applied. For the Active Discovery part, I must give credit to our renowned David Lee, whom you might know if you have ever opened a ticket with the LogicMonitor Support channel.
For the sake of confidentiality, all the screenshots are from my own lab instead of our client's environment; they show the similar intended result, although there are more Contexts in the real production environment.
The AD is a short Groovy script utilizing Java Management Extensions (JMX) to connect to the remote Tomcat from the Collector. The Discovery Method chosen in this datasource should be 'SCRIPT'.
import com.santaba.agent.groovyapi.jmx.*;

def jmx_host = hostProps.get('system.hostname');
def jmx_port = hostProps.get('tomcat.jmxports');
def jmx_url = "service:jmx:rmi:///jndi/rmi://" + jmx_host + ":" + jmx_port + "/jmxrmi";
// NOTE: the original listing never created jmx_conn; opening the connection
// via the collector's JMX helper is assumed here.
def jmx_conn = JMX.open(jmx_url);
// each child of this MBean query is one Tomcat Context
context_array = jmx_conn.getChildren("Catalina:type=Manager,host=localhost,context=*");
context_array.each { context ->
    println context + "##" + context
}
return 0;
(note: tomcat.jmxports is the port used by JMX to connect to the Tomcat servlet container; in this case it is the standard port 9000)
Following is the result of the AD:
(note: one of the context names is 'context-test')
which can be tested from the collector debug window as follows:
$ !jmx port=9000 h=172.6.5.12 "Catalina:type=Manager,host=localhost,context=*"
Catalina:type=Manager,host=localhost,context=* =>
/examples
/manager
/docs
/context-test
/host-manager
/
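The transformation the AD script performs is simply 'context path in, wildvalue##name out'; sketched in Python on the debug output above:

```python
# Context paths as returned by the JMX query in the debug output above.
contexts = ["/examples", "/manager", "/docs", "/context-test", "/host-manager", "/"]

# Active Discovery emits wildvalue##displayname; here both are the context
# path, exactly as the Groovy script's println does.
ad_lines = [f"{c}##{c}" for c in contexts]

for line in ad_lines:
    print(line)
```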
As for the data collection, the mechanism used to collect data is 'WEBPAGE'.
(Note: the port number will depend on the Tomcat settings, and the wildvalue will be the Context name.)
Data collection can be tested in the collector with the command !http:
HTTP response received at: 2017-04-20 09:05:30.85. Time elapsed: 3ms
HTTP/1.1 200
Accept-Ranges: bytes
ETag: W/"214-1492696731000"
Last-Modified: Thu, 20 Apr 2017 13:58:51 GMT
Content-Type: text/html
Content-Length: 214
Date: Thu, 20 Apr 2017 14:05:30 GMT
<html>
<body>
<h3>TEST Tomcat Context web access</h3>
<pre>
<>
[
{
status: "OK",
context: "context-test"
},
{
company:"LogicMonitor",
}
]
</>
</pre>
</body>
</html>
Here is the final result of the monitoring:
From the browser, the Tomcat context can be accessed just like a normal website:
The Datasource is only available for download from LM Exchange (version 1.1); it is not available in the core LogicMonitor Datasource repository.
Note:
This is what Tomcat Manager looks like: