We recently ran into an issue where we could not get a Meraki Security Appliance (MX) to integrate with Microsoft’s Active Directory. The Meraki dashboard was not particularly helpful in identifying why the connection was failing; the Event log just kept repeating the following error:

We verified that the username, password, and IP were correct and that the MX could ping the Domain Controller.

Our next step was to perform a packet capture of the traffic between the MX and the Domain Controller. In the output of the .pcap file, you can see a Client Hello packet in which the client tries to negotiate with the server using 65 different supported cipher suites. The server responds with an immediate RESET, which normally indicates that none of those suites are supported.

Our Domain Controllers were Server 2012 R2 systems. During our search for a solution to the issue, we came across this KB: KB2919355.

It’s important to note that this KB is actually a collection of six files that all need to be run, in a specific order. While running the file for 2919355, we ran into a separate issue where that file would not install. That problem was solved by running two hotfixes first: Hotfix 2939087 and Hotfix 2975061. We were then able to install 2919355, as well as the remaining updates in the KB article. Post reboot, the Meraki Dashboard reported that it was now able to communicate with the Domain Controllers.

We ran another packet capture, and the output of the .pcap is displayed below. You can see that the Client Hello is now met with a Server Hello response packet instead of a RESET. If we dig down and view the cipher suite of the response, we see that AES 256 SHA384 is being used, which apparently was not supported on Server 2012 R2 before the above KB was installed.
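If you prefer to do this inspection from the command line rather than the Wireshark GUI, a display filter along these lines will pull the relevant handshake and reset packets out of a capture. This is a sketch: capture.pcap is a placeholder file name, and newer Wireshark/tshark builds use tls.* field names instead of ssl.*.

```shell
# Show Client Hellos (handshake type 1), Server Hellos (type 2), and TCP resets
tshark -r capture.pcap -Y "ssl.handshake.type == 1 || ssl.handshake.type == 2 || tcp.flags.reset == 1"

# Print the cipher suite the server selected in its Server Hello
tshark -r capture.pcap -Y "ssl.handshake.type == 2" -T fields -e ssl.handshake.ciphersuite
```

A working handshake shows a Client Hello followed by a Server Hello; the broken one shows the Client Hello answered only by a reset.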


I’ve recently started placing Ubuntu web servers up on AWS. These are pretty small, stand-alone, all-in-one systems that don’t use Amazon’s database or Elastic Load Balancer features.
I wanted a way to protect these systems in case Amazon ever had an event where a region was down or unstable, which occasionally does happen. If this were a larger deployment, we’d have some sort of real-time database replication between availability zones and an Elastic Load Balancer that would allow us to seamlessly fail over. In my case, I just want the comfort of knowing there is a copy of the volume in another region, and I want it to happen automatically.

I modified it slightly to provide more verbose logging, and I added a section to both the “snapshot_volumes” and “cleanup_snapshots” functions. I also modified the IAM Security Policy to allow for copying snapshots. We’ll get into all of this in a bit, but before we start, FAIR WARNING: I’m not a developer, and you use this script at your own peril. It creates snapshots and copies data (both of which have costs associated with them) and deletes snapshots. There are lots of things that could go wrong if you do not take the time to understand what you are doing with this script.

First things first, let’s create the IAM Security Policy.

Creating IAM Security Policy

From the main AWS menu select “Identity & Access Management”.

Click “Policies” in the left hand pane

Click “Get Started”

Click “Create Policy”

Click “Select” next to “Create Your Own Policy”

Enter the following:

Policy Name: manage-snapshots

Description: Allow Servers to create and manage snapshots of themselves
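The policy document itself isn’t reproduced here; a minimal sketch of what it might contain, based on the actions described in this post (creating, copying, describing, and deleting snapshots), would look something like the following. The exact action list is an assumption on my part, so trim it to match what your copy of the script actually calls:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "ec2:CreateSnapshot",
                "ec2:CopySnapshot",
                "ec2:DeleteSnapshot",
                "ec2:DescribeSnapshots",
                "ec2:DescribeVolumes",
                "ec2:DescribeInstances",
                "ec2:CreateTags"
            ],
            "Effect": "Allow",
            "Resource": "*"
        }
    ]
}
```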

Download and configure script.

The user’s home directory will hold the AWS CLI configuration files; that directory needs to be set within the script:

# Set the HOME variable to that of whichever user configured the AWS CLI
export HOME=/home/<username>

The script is hard-set to wait 10 minutes between starting a snapshot and attempting to copy that snapshot to the new region. If your snapshots are huge, this may need to be adjusted.

#Copy to US East 1
log "Pausing for 10 Minutes to allow snapshot to complete"
sleep 10m

It’s configured to delete any snapshot older than the retention period, which is currently 7 days. If you want a longer retention period, adjust this:

# How many days do you wish to retain backups for? Default: 7 days
retention_days="7"
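As a sketch of how that retention window turns into a cutoff the cleanup function can compare snapshot timestamps against (assuming GNU date, as on Ubuntu; the snapshot start time below is a made-up example, not output from a real account):

```shell
#!/bin/bash
retention_days="7"

# Anything that started before this epoch timestamp is past the retention window
cutoff=$(date -d "-${retention_days} days" +%s)

# Hypothetical StartTime value, in the format `aws ec2 describe-snapshots` returns
snapshot_start="2020-01-01T00:00:00.000Z"
snapshot_epoch=$(date -d "$snapshot_start" +%s)

if [ "$snapshot_epoch" -lt "$cutoff" ]; then
    echo "snapshot is older than ${retention_days} days and will be deleted"
fi
```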

The region that we’re replicating the snapshots to is hard-set as us-east-1; this will need adjustment if you want snapshots copied elsewhere. The script also uses the description field of the remote snapshot to hold the ID of the original snapshot. This is important: when the original is deleted, that original snapshot ID is used to query the remote region for snapshots whose descriptions match, and those are deleted as well.
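Put together, the copy and the matching cross-region cleanup look roughly like this. This is a sketch of the behavior described above, not the script’s exact code; the source region and the $snapshot_id variable are assumptions:

```shell
# Copy the snapshot to us-east-1, storing the original snapshot ID in the description
aws ec2 copy-snapshot \
    --region us-east-1 \
    --source-region us-west-2 \
    --source-snapshot-id "$snapshot_id" \
    --description "$snapshot_id"

# Later, when the original is deleted, find and delete any remote copies
# whose description matches the original snapshot ID
remote_ids=$(aws ec2 describe-snapshots --region us-east-1 \
    --filters "Name=description,Values=$snapshot_id" \
    --query "Snapshots[].SnapshotId" --output text)
for id in $remote_ids; do
    aws ec2 delete-snapshot --region us-east-1 --snapshot-id "$id"
done
```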

Assuming you are using Dell servers, you might be interested to know that you can install and use the Dell OpenManage Server Administrator application on your ESXi hosts and manage their hardware in nearly the same way you do on your Windows servers. First, OpenManage needs to be downloaded; you can find it here: OpenManage. Make sure to download both the Windows version (for the system that will be managing the ESXi host) as well as the version that matches your ESXi version.

Next, we need to move the .vib over to the host. The way I do this is with a tool called WinSCP, which can be found here.

Enabling SSH on ESXi host

Connect to the host with the vSphere Client

Select the Host, and then click the “Configuration” tab

Click “Security Profile” from the bottom right-hand box

Click “Properties…” in the row titled “Services”

Highlight “SSH” and then click “Options…”

Click “Start”

Click “OK”

Moving file to host with WinSCP

Open WinSCP and enter the following:

File Protocol: SCP

Hostname: <IP Address of Host>

Username: root

Password: <root password>

When prompted, click “Yes” to accept the host key of the server

In the right hand pane find the folder called /tmp and double click on it.

In the left hand pane, locate the .vib on your PC and then copy it into the /tmp folder.

Installing OpenManage

Place the host in Maintenance Mode

SSH into the host using PuTTY

Enter the following command, making adjustments to the file name to match that of your .vib:
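The install follows the standard esxcli VIB syntax; assuming the file was copied to /tmp as above (the file name here is a placeholder for your actual OpenManage .vib):

```shell
esxcli software vib install -v /tmp/<OpenManage vib file name>.vib
```

Note that esxcli expects the full absolute path to the file.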

I recently ran into an issue where a capture WIM image, created from a Windows 8.1 (x64) and Server 2012 R2 install.wim, repeatedly failed on boot with the error:

Windows failed to start. A recent hardware or software change might be the cause. To fix the problem:

Insert your windows installation disc and restart your computer.

Choose your language settings, and then click “Next.”

Click “Repair your computer.”

If you do not have this disc, contact your system administrator or computer manufacturer for assistance.

File: \Windows\System32\boot\winload.exe

Status: 0xc000000f

Info: The application or operating system couldn’t be loaded because a required file is missing or contains errors.

I was able to solve this issue by mounting the WIM with imagex, changing nothing, and then unmounting the WIM using the /commit argument.

Follow these steps (assuming your file is located at c:\capture.wim and your mount directory is c:\mount):

imagex /mountrw c:\capture.wim 1 c:\mount
imagex /unmount /commit c:\mount

Once the image was committed, I opened the WDS console, selected the current Capture image, and selected “Replace Image…”. I then pointed to the c:\capture.wim file previously edited.

I then rebooted the client and tried the capture image again; this time it worked without issue. I’m not sure what mounting and unmounting the image did, but I suspect it validates or changes certain files during the mount and unmount that are required for the image to be bootable.

Recently I had a need for my AWS instances to dynamically update CNAME records each time they started. You’ll only get a dedicated IP if you purchase an Elastic IP, and then only 5 per account unless you reach out to Amazon for more. Knowing that I’m both cheap and lazy, I wanted something that would be free as well as automatic. I found quite a few blogs and articles that were a big help, but no one ‘put it all together’ for me. After about 6 hours I’ve got a fully working solution, but please feel free to comment on where it can be improved.

This article makes the following assumptions: Ubuntu 14.04 LTS is being used as the instance OS, and the external DNS domain is public and hosted on Route 53.

I’m neither an AWS nor Linux daily user, so if you see something that could be improved, please do let me know.

Create AWS User, Group, and Policy for Dynamic DNS

From the main AWS menu select “Route 53”

Click “Hosted Zones” in the left hand column

Click “Create Hosted Zone”

Enter the Domain Name that will be updated by Servers, this can be a subdomain if desired.

Type: Public Hosted Zone

Click “Create”

Once created, note the zone ID for later.

From the main AWS menu select “Identity & Access Management”.

Click “Policies” in the left hand pane

Click “Get Started”

Click “Create Policy”

Click “Select” next to “Create Your Own Policy”

Enter the following:

Policy Name: change-dns-records

Description: Allow Servers to update their own CNAME Records each time they reboot.

Policy Document:

change-dns-records


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "route53:ChangeResourceRecordSets",
                "route53:GetHostedZone",
                "route53:ListResourceRecordSets"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:route53:::hostedzone/<Zone ID>"
            ]
        },
        {
            "Action": [
                "route53:ListHostedZones"
            ],
            "Effect": "Allow",
            "Resource": [
                "*"
            ]
        }
    ]
}

NOTE: Replace <Zone ID> with the zone ID of the DNS zone the server needs to update.

NOTE: Replace <dns-editor’s access key ID> and <dns-editor’s secret access key> with appropriate values from the dns-editor user. Update YourDomain.com to match either your top level, or a subdomain of one of your domains.

Next, create a file called /usr/sbin/update-route53-dns.sh and enter the following into the file:

NOTE: replace [Client_URL_ShortName] in the above text with whatever you want the CNAME to be. I use the hostname of the server, but you could use anything (www, testing, mail, etc.).

NOTE: it should not be necessary to delete the record and then re-create it; the --replace flag should be able to do that in a single command. However, I could not get it to work in cli53 build 6.5.0, which is what was used here, so I had to delete the existing CNAME and then re-create it. I also noticed that record names are case sensitive and always created as lowercase, so in your delete command you need to make sure you are specifying the record in all lowercase.

NOTE: in some ami distributions ec2metadata needs to be replaced with ec2-metadata
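Pulling those notes together, a minimal sketch of what /usr/sbin/update-route53-dns.sh could look like (the cli53 rrdelete/rrcreate syntax shown here is from the go builds of cli53; the record name, domain, and TTL are placeholders you would change):

```shell
#!/bin/bash
# Grab this instance's public hostname from the metadata service
# (use ec2-metadata instead of ec2metadata on some AMIs)
public_hostname=$(ec2metadata --public-hostname)

# Delete the existing record first (remember: names are stored in lowercase),
# then re-create it pointing at the current public hostname
cli53 rrdelete YourDomain.com client_url_shortname CNAME
cli53 rrcreate YourDomain.com "client_url_shortname 300 CNAME ${public_hostname}."
```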

Lastly, we need to add the script to the startup scripts that run during boot. Enter the following commands:
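The exact commands aren’t shown here, but on Ubuntu 14.04 one straightforward way to run the script at boot (an assumption on my part, not necessarily the author’s method) is to make it executable and call it from /etc/rc.local, above the final "exit 0":

```shell
chmod +x /usr/sbin/update-route53-dns.sh
# Insert the script into /etc/rc.local, just before the "exit 0" line
sed -i '/^exit 0/i /usr/sbin/update-route53-dns.sh' /etc/rc.local
```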

If you are like me, you occasionally need to set up a single AP at a site that is either too small for a controller or unwilling to pay the extra costs associated with one. Here are the steps required to change to autonomous mode; I believe that all of the x702i series are shipping in lightweight mode by default.

Log into www.cisco.com

Click “Support” at the top

Click the “Downloads” tab

Select “Wireless” from the left hand pane

Select “Access Points”

Select “Cisco 1700 Series Access Points”

Select “Cisco Aironet 1702i Access Points”

Click “Autonomous AP IOS Software”

Ideally, you are looking for the highest-numbered firmware revision that’s marked as MD or GD. In some cases you’ll only see ED revisions; download the highest revision number. Click the “Download” button, and agree to the terms of service.

Connect a network cable from your PC to the AP.

Start a TFTP server on your computer, and set your interface to 10.0.0.1 255.255.255.0.

Open a serial connection to the AP; after it finishes booting, log in. [Default Password: Cisco]

Enter the following commands, pressing enter after each line:

enable

debug capwap console cli

debug capwap client no-reload

capwap ap ip address 10.0.0.2 255.255.255.0

capwap ap ip default-gateway 10.0.0.1

archive download-sw /force /overwrite tftp://10.0.0.1/%File Name%.tar

The AP will reboot automatically. After it finishes rebooting, log back in and issue the following command:

show version

Verify the AP is now running the updated image, and that you have access to the full suite of commands.

NOTE: you’ll notice that you keep getting a capwap error while the AP is in lightweight mode. If you are having trouble entering these commands because of it, put them all into a notepad file, wait for the error to appear, and then quickly paste them all in at once.

Lastly, just save your config and test. You should now be able to connect to the Chromecast from a wireless client connected to the same WLAN as the Chromecast, or, if you followed the previous post on configuring a SonicWall for use with a Chromecast (located here), from a wired client connected on another interface of your SonicWall.

Select the radio button titled: Enable reception of all multicast addresses

Click Accept at the top

Now click Network from the left hand drop down, and when the menu expands click Zones

For each Zone that will be participating with the Chromecast, click the configure icon and check the box titled Allow Interface Trust if it’s not already selected. Click OK

From the Network menu on the left click Interfaces. For each interface that’s part of any zone configured in step 8 perform the following: Click the configure icon for the interface, click the Advanced tab, check the box titled Enable Multicast Support. Click OK.

Now click Firewall from the left hand drop down, and when the menu expands click Access Rules

Select the radio button titled Matrix at the top

For each zone that was configured in step 8, select the rule from ZONE to MULTICAST

Ensure that there is an ALLOW rule with ANY listed for Source, Destination, and Service. If there is not an ALLOW ANY ANY ANY rule, create one. Repeat for each Zone that was configured in step 8

Update: as James points out below, you also need a traditional bi-directional Allow rule between both zones.

Testing

From your client, open Chrome and download the “Google Cast” extension if it’s not already installed.

Verify that when you try to cast a tab, the Chromecast that’s located on the other interface/zone is listed.

I’ve been migrating some of our slower customers away from Exchange 2003 recently, and I ran into an issue that took me 3 days to figure out. I was getting the following error on the 2010 server: Couldn't find an Exchange 2010 or later public folder server with a replica for the free/busy folder: EX:/O=FIRST ORGANIZATION/OU=FIRST ADMINISTRATIVE GROUP despite successfully running AddReplicaToPFRecursive.ps1 on the \NON_IPM_SUBTREE\ top public folder. On the 2003 server, all folders, including the Free/Busy folders, displayed two replicas: one for 2003 and one for 2010.

I eventually moved all replicas off the 2003 server and removed its PF database, hoping that this would force over any replicas that remained and perhaps weren’t being displayed properly to the 2010 server. I was prompted to do just that, but I still continued to receive the error stated above.

The reason that I’m posting this is not that it’s a new issue; it seems to be a pretty common problem. Rather, it’s the hard time I had finding the correct resolution online. Perhaps my search terms were off, but just in case, the way to solve this is as follows: