megaport-pstools

PowerShell Tools for automation and scripting of Megaport services.

This started life with the purpose of figuring out how one might schedule a bandwidth change on a VXC, but then blew up into various other tools to simplify other tasks and requests from users, such as exporting and graphing bandwidth usage, or detecting interface/connection issues.

While the Megaport web UX at https://megaport.al is really great, simple and intuitive, it’s a pain having to click buttons over and over – and besides, it ain’t “DevOps-y” cool. There’s always a need for scripted automation that integrates with other PowerShell suites such as the Azure PowerShell Tools.

Why PowerShell? Meh, why not? Actually, I’ve just been spending a lot of time on Windows lately, running and writing Azure automation scripts, so it was pretty easy to write up a few test scenarios using the API and Invoke-RestMethod. By the time I’d finished testing 4-5 API endpoints, I was already reusing the majority of the same code, so it pretty much escalated/optimized from there.

Btw – PowerShell works on Windows, Mac (untested) and Linux, and VS Code is pretty cool too 🙂

In my job, data collection and data deliveries are done via either API or FTP. FTP has its drawbacks, but ultimately gets the job done. Some of those issues come down to the lack of improvements in the technology: efficiency of the protocol, scaling, availability, file-locking, and active vs. passive connections vs. firewall security – and so on. Not all are impossible challenges, but they’re unnecessary in today’s cloud-oriented technology world.

We still can’t get rid of the need to transfer exported data over a file transfer protocol; however, we can change the protocol and the data handler and create a secure method that’s much easier to deal with. While it’s possible to run FTP with SSL/TLS for security, SFTP provides a much less intrusive replacement.

In a past post, I offered a method which allowed users to create a secondary service that was dedicated to SFTP and did not use SSH to connect to a console. Later versions of OpenSSH don’t support the same method due to shared memory overlap, but – if you really want an isolated and dedicated SFTP service, then consider Docker instead. In this guide, I’m going to show how to secure your existing SSH for both remote console and SFTP-only access.

Step one: SFTP vs SSH (Shell)

I’m going to refer to SSH as “shell access” and SFTP as just “sftp without shell access”. I also use Debian, so you’ll just have to adapt for your own distro.

My /etc/ssh/sshd_config looks like this:

Port 22
Port 7387
...
# Applies to members of the 'sftponly' group who are
# coming in on port 7387
Match Group sftponly LocalPort 7387
    ChrootDirectory %h
    X11Forwarding no
    AllowTcpForwarding no
    ForceCommand internal-sftp -u 000

External firewall port-forwards port 7387 only.
I leave it up to you if you want to use password or keys.
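Before kicking sshd, it’s worth validating the config, and note that ChrootDirectory is picky about ownership. A minimal sketch – the username alice and her home path are just examples:

```shell
# Validate sshd_config syntax first - a typo here can lock you out of the box
sshd -t

# Reload to pick up the new Match block (Debian's service name is "ssh")
systemctl reload ssh

# ChrootDirectory %h requires the chroot target (the user's home here)
# to be root-owned and not group/world-writable, or sshd rejects the login
chown root:root /home/alice
chmod 755 /home/alice
```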

Step two: add sftp as a shell

echo "/usr/lib/openssh/sftp-server" >> /etc/shells

Step three: Create the group and first user

Create the sftponly group; I like mine to be system accounts, but it does not have to be.

groupadd --system sftponly
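With the group in place, creating the sftp-only user ties steps two and three together. A sketch, assuming a hypothetical user alice (the shell path matches the /etc/shells entry from step two):

```shell
# Create the user: member of sftponly, home dir created, sftp-server as shell
useradd -m -g sftponly -s /usr/lib/openssh/sftp-server alice
passwd alice

# The chroot target (%h) must be root-owned for sshd to accept the login,
# so give alice a writable subdirectory for her files instead
chown root:root /home/alice
mkdir -p /home/alice/files
chown alice:sftponly /home/alice/files
```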

Extending Security

I recommend that users still use firewall ACLs to limit who can access their SFTP server in the first place, but if you provide a service to anyone, anywhere, then consider basic isolation practices to enhance the security further.

Isolate the host serving sftp service

Run read-only where possible, with restricted r/w capability (Docker is good for this)

Disable password authentication; use RSA keys protected with a passphrase.

If you need to use passwords, make sure you enforce complex and long ones! Here’s a neat one-liner for password generation:

openssl rand -base64 15

Can you also support a two-factor method? Google-auth is pretty easy to integrate, and there are more options available if you look around. A nice cheap two-factor is to require both an RSA pubkey with a passphrase and the user password:

AuthenticationMethods "publickey,password publickey,keyboard-interactive"

So you want to change the ESX path selection policy on your datastores from MRU to RR, but the docs say you have to reboot ALL your hosts?

Recently I raised a request for my IaaS provider to change our path selection from the default MRU to RR. The tech/helpdesk person did some Google searching on the topic, got confused by the content – the VMware KB articles said to reboot while other articles were unclear – and so raised a support ticket with VMware, and supposedly with HP, to clarify. The response I got back from my IaaS provider was that both HP and VMware said you must reboot.

So I called bullshit on that, in fact, that’s the cheap/lazy answer.

The way it works is that you can create a policy to map VMW_SATP_ALUA to VMW_PSP_RR, and it will automatically apply to any new devices being added. Sure, it only affects new storage; existing storage won’t change without a) a host reboot or b) manually setting the paths on each LUN. A reboot is just the brute-force approach to save clicks.

I fully expected VMware Support to come back with the “reboot your computer” answer – I’ve got that same answer for many issues over the years since 3.5 (back then it was about all you could do, tbh) – but I was a bit surprised by HP also stating this, given their own HP 3PAR + VMware 6 Best Practice Guide gives both options: reboot, or manually set the paths …

This is how I have always done it since ESX 3.5.
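For the record, here’s roughly how it looks with esxcli; the device ID (naa.…) is made up for the example, so check your own SATP and device identifiers first:

```shell
# Make VMW_SATP_ALUA devices default to Round Robin - this only affects
# newly claimed devices, existing ones keep their current PSP
esxcli storage nmp satp set --satp VMW_SATP_ALUA --default-psp VMW_PSP_RR

# For storage that's already claimed, set the PSP per device
esxcli storage nmp device list   # find your naa.* identifiers
esxcli storage nmp device set --device naa.600508b1001c01234567 --psp VMW_PSP_RR
```

Loop the last command over your LUN list and you’re done in minutes, no reboot required.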

Now, if I had 10 or more ESX hosts, then yeah sure, let’s reboot to save clicks!

That’s not to say rebooting isn’t a valid choice. If you make quite a few changes across a system, a reboot might be needed to weed out quirks. In my case, it was impractical for various reasons, and also unnecessary to perform vMotions across 100 or so VMs over a few days for what could be done in a few minutes.

Following on from my post on how to create your own SSL Certificate Authority, I’ve also started doing this for custom apt repos where we allow public repos over http and private repos over https (+ basic-auth).

To do this, you effectively need 3 (+1) things:

apt-transport-https package on the client

Install your Root CA certificate so your self-signed certificates are trusted and the certificate errors go away, OR check out letsencrypt.org, OR buy a valid one from a proper CA and be done with it.

Set up HTTPS on the web server.

We use basic-auth over HTTPS, so there’s a fourth step.

Configure basic auth in /etc/apt/sources.list.d/custom.list

I won’t cover the details on configuring Apache or creating an SSL Root CA or creating your own repo, I’ll assume you already have that figured out.

So here are the condensed tasks.

Take your root CA cert and key.

Copy the cert to the destination machine (the one connecting to your repo). This usually goes in /usr/share/ca-certificates/somename/my-root-ca.crt

On the client, update the CA list:

dpkg-reconfigure ca-certificates

In an apt sources list file (I prefer to use /etc/apt/sources.list.d/), add the repo:

deb https://your.reposerver.com/deb stable main

or with basic-auth:

deb https://user:pass@your.reposerver.com/deb stable main
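Strung together, the client side looks something like this; the paths, file names and repo URL are illustrative:

```shell
# 1. HTTPS transport for apt (built into apt itself on newer releases)
apt-get install apt-transport-https

# 2. Trust your root CA so the repo's certificate validates
mkdir -p /usr/share/ca-certificates/somename
cp my-root-ca.crt /usr/share/ca-certificates/somename/
dpkg-reconfigure ca-certificates

# 3. Point apt at the repo (basic-auth variant shown)
echo "deb https://user:pass@your.reposerver.com/deb stable main" \
  > /etc/apt/sources.list.d/custom.list
apt-get update
```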

So why use a TV as a PC monitor, you ask? I fall into the “because I can” category. I upgraded my lounge from a cheap Soniq to an LG 3D Smart TV, and so I had a TV sitting around collecting dust.

Motivation to reuse the TV came while I was mid-way through a DIY office renovation. I figured I’d wall-mount my 27″ AOC and decided to get a longer wall mount for “future proofing” should I upgrade the screen. I used the Soniq’s size (46″ diagonal, 40″ screen) as the template for positioning, given its larger size. Once it was on the wall though, I couldn’t resist leaving it there to see how it looked at the end with all the bench-tops in place.

Once the TV wall mount was secured into place, all cables neatly aligned and the bench-top placed back into position, I hung the screen on the wall, plugged it in and took a step back to bask in my achievement … I burst out laughing. It was a ridiculous sight to see at first, but I quickly started to geek out at it.

Anyone who has ever plugged a TV into a PC will tell you there are two main issues:

Overscan.

Poor text quality.

If you want to use your TV as a PC monitor – whether it’s for an HTPC, a gaming rig, or just because you can – then depending on the TV, you may not have an obvious way to disable overscan. In my case, my TV is a Soniq 40″ E40Z10A-NZ, which falls into the non-obvious category as there is no option in the TV menu.

Fortunately, after a little digging, I found the factory menu code for my Soniq LED TV.

Press “Source”, then enter 200912

Once in, I was able to adjust the Overscan values to zero and retire GPU-based scaling. 🙂

Now if I can just get the text to look a lot less shit …

Update: Switching to VGA solved this problem and also supports wake-up. With HDMI, on the other hand, the monitor goes to sleep but won’t wake up.

You’ll note the -CAcreateserial parameter; this only needs to be used once. The next time you create a certificate, change the

-CAcreateserial

to

-CAserial rootCA.srl
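To see the two serial options side by side, here’s a self-contained sketch run in a temp directory; all file names and subjects are made up for the example:

```shell
set -e
cd "$(mktemp -d)"

# Root CA key + self-signed cert
openssl req -x509 -newkey rsa:2048 -nodes -keyout rootCA.key -out rootCA.crt \
  -days 365 -subj "/CN=Example Root CA"

# Server key + CSR
openssl req -newkey rsa:2048 -nodes -keyout server.key -out server.csr \
  -subj "/CN=server.example.com"

# First signing: -CAcreateserial writes rootCA.srl alongside the CA cert
openssl x509 -req -in server.csr -CA rootCA.crt -CAkey rootCA.key \
  -CAcreateserial -out server.crt -days 365

# Subsequent signings: reuse the serial file with -CAserial
openssl req -newkey rsa:2048 -nodes -keyout second.key -out second.csr \
  -subj "/CN=second.example.com"
openssl x509 -req -in second.csr -CA rootCA.crt -CAkey rootCA.key \
  -CAserial rootCA.srl -out second.crt -days 365

# Both certs should verify against the root
openssl verify -CAfile rootCA.crt server.crt second.crt
```

The serial file just keeps track of the last serial number issued, so each cert the CA signs gets a unique one.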

Copy your rootCA.crt to a USB stick and plug it into your PCs.

In Windows, double-click the rootCA.crt and add it to the “Trusted Root Certificate Authorities” store. Firefox uses its own store, so you’ll have to add it via Options->Advanced->Certificates->Authorities->Import

For Linux browsers – most use their own stores, so check the docs; it should be in similar places as in Firefox.

For Mac, I dunno, google it.

EDIT: You could also just use letsencrypt.org, create the certs for apache and then convert to pfx for IIS/Azure