What are the additional features of the ACET as compared to CAT? Let's take a look…

ACET is a spreadsheet

While the FFIEC Cybersecurity Assessment Tool (CAT) was called a tool, it was released in the form of a PDF download. This forced financial institutions to complete the tool manually on paper, to develop their own mechanism to complete the assessment electronically, or to use third-party software such as Tandem. The NCUA released the ACET as a spreadsheet in part to provide credit unions a functional option for completing the CAT.

ACET includes a dashboard

The first sheet in the ACET spreadsheet is a dashboard. The dashboard provides summary information about the credit union, a completion status for the inherent risk profile and cybersecurity maturity sections, and the inherent risk levels. The dashboard helps the credit union and their examiner see the completion status of the assessment at a glance.

ACET has an Admin sheet for NCUA examination use

ACET was primarily designed to be used during NCUA examinations; therefore, the NCUA included an Admin sheet to be used by NCUA examiners. This sheet is primarily used to calculate and track review hours used during the examination process.

ACET contains a document request list

Since ACET is used as an examination tool, or work program, a document request list was added. The current version (v032618) of the ACET does not have a hyperlink from each document request to any inherent risk questions or maturity statements. However, validation text added to these statements, in many cases, does reference back to the requested items.

ACET adds validation text to inherent risk statements

Answers to the inherent risk profile statements help institutions determine their overall cybersecurity inherent risk. ACET expanded these statements to include "Validation Approaches" for each inherent risk statement. The validation approaches language describes what an institution or examiner should review to answer, or validate the answer to, an inherent risk statement. In many cases, these validation approaches reference back to documents you can review from the document request list.

ACET summarizes maturity in a Maturity Details sheet

The ACET includes a sheet called "Mat. Details." This sheet provides a summary of the institution's maturity: percentages of "Yes" answers are displayed by Component for each maturity level. This view provides a snapshot of the institution's cybersecurity maturity across all of the Components.

ACET provides additional reporting fields for declarative statements

The ACET includes additional columns to help institutions document evidence or additional information related to each cybersecurity maturity declarative statement in the "Domain" sheets. The first additional column, Comment [Required for Yes(c)], was added for credit unions to have a place to explain the "Yes with compensating controls" answer. Two additional columns, Reviewed and Suggested Edits, were added to help examiners when reviewing the ACET.

ACET incorporates a guide with additional commentary and mappings

The ACET includes a sheet named "Guide" with additional commentary and mappings to help an institution or examiner understand and answer the cybersecurity maturity declarative statements. The additional columns include:

Comment: commentary with additional details describing what is expected from the declarative statement and what value the control has on cybersecurity.

Examination Approaches: describes what an institution or examiner should review to answer or validate the answer to a declarative statement.

Baseline Mapping: mapping declarative statements to the FFIEC IT Examination Handbooks. These are the same mappings in the CAT Appendix A.

NIST Mapping: mapping declarative statements to NIST.

ACET and Tandem

When the FFIEC Cybersecurity Assessment Tool (CAT) was first released, Tandem developed an application to aid in its use. Now Tandem has updated the tool to include the additional ACET features and to allow credit unions to complete the assessment through Tandem and download the results in the ACET spreadsheet format. The Tandem SaaS comes in both a free and paid version. Join more than 1,000 other financial institutions and sign up for the free Tandem Cybersecurity Assessment Tool today by visiting conetrix.com/tandem/cybersecurity-assessment-tool-ffiec.

If you have ever been annoyed with Office AutoCorrect changing words like "VMware" to "Vmware", you'll be relieved to know there is help for you. In any Office application, go to File -> Options -> Proofing -> AutoCorrect Options -> Exceptions -> INitial CAps. There you can add the string in question (e.g., "VMw") to the list to stop Office apps from constantly correcting your typing "errors".

I've increasingly had issues getting Excel to open other Excel files if I already had one open. I noticed it happened every time I was working in one of my spreadsheets that contained macros. However, after some research, I discovered that Microsoft has intentionally designed this behavior: if Excel thinks you are editing a cell, it will not allow you to open any other Excel files (even unrelated ones).

Although there isn't really a true solution, if you press Enter (or otherwise exit the cell you are editing) or save the file, you should be able to open other files.

We had a banking customer with a Fiserv application consistently crashing under Windows 10. The crash would always display a .NET Framework error. All users of the application were affected, but the severity varied from user to user: one user would crash every couple of hours, while another would crash once every other day. No user was doing the exact same thing, and no other errors appeared before the crash. It seemed to be a completely random occurrence.

Fiserv support could not recreate the issue and advised us to update to Windows 10 1803. While updating a PC to test this solution, I checked the event logs and noticed a printer kept trying to map every 60 minutes and failing. Whenever this printer failed to map, the .NET error would also show up in the event logs. Group Policy refreshes were triggering the printer mapping error. I launched the application, ran "gpupdate", and sure enough, the application crashed. I looked into the GPOs and found the drive map pointing to the program's location was set to "replace" instead of "update" or "create". This caused the file path to be lost every time there was a Group Policy update. I changed the drive map to "create" and it resolved the issue.

I have been working with a customer on a file server and domain migration project. The original plan was to move the files to our Aspire datacenter on a server that was in a different domain. Since we were moving domains, we were going to have to recreate the file permissions on the new domain. I typically run Robocopy using the /COPYALL (which is equivalent to /COPY:DATSOU) parameter, but since we did not want to copy the security, owner, or auditing information, I used /COPY:DAT.

After the initial seeding, the customer prioritized some other moves and postponed the file server migration. During that time, the old datacenter suffered a three-day Internet outage. After the outage, the customer decided to move the files while client machines remained on the old domain to prevent another outage. This meant we needed to copy the existing permissions instead of following the original plan to translate the permissions at the time of migration.

I changed my Robocopy scripts to use /COPYALL instead of /COPY:DAT. Robocopy copied the permissions for the files that had changed or been added since the seeding, but it did not fix the security permissions for the files that had not changed. This is by design, as Robocopy only copies permissions when it copies a file. In order to reevaluate the permissions, the /SECFIX parameter must be added. I changed my script to include /COPYALL /SECFIX and it synced the files AND the permissions. This Robocopy run takes longer because it has to evaluate security instead of just the files.

To keep files and permissions in sync, you need to use both the /COPYALL and /SECFIX switches. You can add /V for verbose logging. The Robocopy command I used to keep the files and permissions in sync was: "robocopy source destination /COPYALL /SECFIX /MIR /S /E /DCOPY:T /R:0 /W:0 /LOG+:log.log".

What is Colorado Cybersecurity Regulation (HB 18-1128)?

On January 19, 2018, the General Assembly of the State of Colorado introduced House Bill 18-1128, Concerning Strengthening Protections for Consumer Data Privacy. The regulation was signed into law on May 29, 2018 and goes into effect on September 1, 2018.

The new regulation contains four primary sections:

Disposal of Personal Identifying Information

Protection of Personal Identifying Information

Notification of Security Breach

Security Breaches and Personal Information

The first three sections focus on how a "covered entity" can protect personal identifying information (PII). A "covered entity" is defined as a "person" (e.g., an individual, corporation, business trust, etc.) who maintains, owns, or licenses PII in the course of their business, vocation, or occupation.

Section Four shifts some wording around, but repeats the first three sections, replacing the term "covered entities" with "governmental entities."

Does Colorado HB 18-1128 apply to Banks and Credit Unions?

Yes. While the regulation defines PII a couple of different ways, both definitions include things a financial institution would "maintain, own, or license" in the course of normal business (e.g., social security numbers, credit card numbers, debit card numbers, account numbers, etc.). If you are a financial institution in the State of Colorado, Colorado HB 18-1128 applies to you.

Are Financial Institutions in Compliance with Colorado HB 18-1128?

Let's break this down by section.

Section One: Yes.

Financial institutions are already subject to GLBA, so the organization should already have a policy in place that defines the secure disposal of paper and electronic documents containing PII.

Section Two: Yes.

Again, since financial institutions are already subject to GLBA, the organization should already have reasonable security procedures and practices in place to protect PII from unauthorized access, use, modification, disclosure, or destruction.

Section Three: Partially.

Per GLBA, each financial institution should have an incident response policy, program, and/or plan that outlines what the organization should do in the event of a security breach. However, Section Three additionally includes new requirements, specific to the State of Colorado, about classification and notification of a security breach.

For example, Section 3(2)(e) states that if the security breach affected more than 500 Colorado residents, the covered entity must notify the Colorado Attorney General as soon as possible, but no later than 30 days after determining a security breach occurred. This requirement is new and it is specific to Colorado organizations, so it does not likely exist in your current incident response policy, program, and/or plan.
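As a simple illustration of the 30-day window described above (this sketch only shows the date arithmetic; the function name is hypothetical, and you should consult the statute itself for the actual requirements):

```python
from datetime import date, timedelta

def attorney_general_deadline(determined_on: date) -> date:
    """Latest date to notify the Colorado Attorney General under
    Section 3(2)(e): no later than 30 days after determining a
    security breach occurred (assuming more than 500 Colorado
    residents were affected)."""
    return determined_on + timedelta(days=30)

# A breach determined on the law's effective date:
print(attorney_general_deadline(date(2018, 9, 1)))  # 2018-10-01
```

Remember that the statute also says "as soon as possible" — the 30 days is an outer bound, not a target.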

How to Prepare for September 1st

To prepare for the September 1st effective date, it would be beneficial for each financial institution to compare their existing incident response policy with the new requirements in Section Three and make updates, as needed.

For Tandem Customers: Information is also available about how the requirements of HB 18-1128 are already addressed in Tandem, including recommendations for incorporating the Colorado-specific requirements into your existing information security program.

What is Tandem?

Tandem is online information security and compliance software designed to increase security and help financial institutions stay in compliance with GLBA and FFIEC guidance. Tandem is used by financial institutions across the country and saves both time and money without sacrificing information security, cybersecurity, or compliance.

We had a customer report that all browser windows were closing for users, and this was increasing in frequency. Most of the users reporting the issue were at the corporate office, which has about 150 users and is where the IT department is located. I performed a remote session with one of the users and confirmed the issue. Internet Explorer, Chrome, and Firefox would all close, not crash, at the same time.

My first thought was that some remote assistance and IT management software they had recently installed was causing the issues. We uninstalled the software and the issues continued. My next thought was that something malicious was on the network and was killing the processes remotely. I moved the PC to the guest wireless network and the issues stopped. After moving the PC back to the internal network, the issues began again. After a while, the issues randomly stopped for this user. I moved on to looking at another user's PC. The IT department did not know of any new devices that had been brought onto the network.

Whatever was causing the issues was obviously powerful enough to kill processes. The browsers seemed to be closing at regular intervals: at the top of the hour and half past the hour. I started Process Monitor, Process Explorer, and Wireshark, opened the browser, and waited. As expected, the issue occurred again. I looked through the Wireshark capture and did not see anything odd. I then looked at the Process Monitor log and found several cmd.exe processes killing the browser applications. At about the same time as the cmd.exe commands that killed the browser processes, I saw nxclient.exe processes that called cmd.exe and ran taskkill commands.

I started searching online and found a post in the NxFilter support group discussing the same issue. This customer had used NxFilter for web filtering for several years and was running version 5.0 of NxClient, which was older than the version mentioned in the support group. The NxFilter creator responded to that thread and said the client was killing the browsers to force a refresh of the user's session, but that this was not the correct behavior, and that a newer version of NxClient fixed it. Version 9.1.3 of NxClient was current, so I updated the customer to the newer version. That resolved the issues.

How do you know what due diligence documents to gather from each of your vendors? There are many methods available, but some result in more accurate documentation than others. Today, I'm going to review two of the primary methods and discuss the effectiveness of each method.

Method #1: The Bucket Method

I often see, what I will call, the bucket method.

It Goes Something like This

Imagine you have a list of questions you ask about vendor characteristics, and then you classify that vendor based on the number of questions answered as "yes." For example, a vendor should be considered:

"Level 1" if two or less are answered as "yes."

"Level 2" if three to four are answered as "yes."

"Level 3" if five or more are answered as "yes."

Then, you could define the required due diligence based on the level of the vendor, or based on the bucket in which the vendor is grouped. At "Level 1," collect only a service level agreement. At "Level 2," collect a contract, a confidentiality agreement, and financial statements. At "Level 3," collect all document types (e.g., a contract, confidentiality agreement, financial statements, SOC report, examination report, BCP, etc.).
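The classification logic above can be sketched in a few lines of code. This is a hypothetical illustration, assuming five questions and the document lists from the example:

```python
# The "bucket method": the level is driven ONLY by how many
# characteristic questions were answered "yes" -- not by which ones.

REQUIRED_DOCS = {  # documents required per level (illustrative names)
    1: ["service level agreement"],
    2: ["contract", "confidentiality agreement", "financial statements"],
    3: ["contract", "confidentiality agreement", "financial statements",
        "SOC report", "examination report", "BCP"],
}

def vendor_level(answers: list) -> int:
    """Classify a vendor by counting 'yes' (True) answers."""
    yes_count = sum(1 for answer in answers if answer)
    if yes_count <= 2:
        return 1
    elif yes_count <= 4:
        return 2
    return 3

def required_documents(answers: list) -> list:
    return REQUIRED_DOCS[vendor_level(answers)]

# A vendor with three "yes" answers lands in the Level 2 bucket,
# regardless of which characteristics those answers represent.
print(vendor_level([True, True, True, False, False]))  # 2
```

Note that the question-to-document link is lost entirely: only the count survives, which is exactly what causes the problems described below.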

What Happens Now?

This method seems relatively simple to carry out. But in reality, it can create a lot of unnecessary document exceptions, and occasionally miss opportunities to request relevant documents.

Unnecessary Document Exceptions in a Bucket Method

Consider a vendor who is "Level 3." While five characteristics applied to them, several of the required documents are both unnecessary to request and, in some cases, unreasonable. This results in an exception record to explain each case and ultimately requires more effort from you, as the vendor manager, to oversee the relationship.

Missed Opportunities for Requesting Relevant Documents in a Bucket Method

Consider a vendor who is "Level 2." While only three characteristics applied to the vendor, one of them is very important: if this vendor were unavailable for 24 hours, it would be detrimental to our business. We should get their BCP, but we did not, because it was not required for "Level 2" vendors.

What This Means for You

The bucket method costs a lot of time and effort even though the labeling process seems quick and simple.

Method #2: The If-Then Method

Instead of the bucket method, consider the more accurate if-then method.

It Goes Something like This

Imagine you have a list of questions you ask about vendor characteristics. You could say that if you answer Question A as "yes," then you should collect a specific type of document related to the effects of that characteristic, Document A. Here are a few examples to consider:

If a vendor performs critical functions or provides critical services, then you should get a service level agreement.

If a vendor uses subcontractors in the performance of critical functions, then you should get their third-party due diligence of subcontractors.

If a vendor stores customer information, then you should get a SOC report.
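The if-then rules above amount to a direct mapping from each question to the document its "yes" answer warrants. A hypothetical sketch (question keys and document names are illustrative):

```python
# The "if-then method": each characteristic question maps directly to
# the document that a "yes" answer makes relevant.

DOCUMENT_RULES = {
    "performs_critical_functions": "service level agreement",
    "uses_subcontractors_for_critical_functions":
        "third-party due diligence of subcontractors",
    "stores_customer_information": "SOC report",
}

def required_documents(answers: dict) -> list:
    """Return only the documents whose triggering question was
    answered 'yes' (True). Unanswered questions default to 'no'."""
    return [doc for question, doc in DOCUMENT_RULES.items()
            if answers.get(question, False)]

# A vendor that stores customer data but is not otherwise critical
# only needs to supply a SOC report -- no bucket, no extra documents.
print(required_documents({"stores_customer_information": True}))
# ['SOC report']
```

Refining the program is then just a matter of editing the rule table: split a question, change its document, or add a new pair, without reclassifying any vendors.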

What Happens Now?

By using the if-then method, you only gather the documentation that is appropriate to the third party relationship. This method can be continually refined. If you notice you are creating a lot of document exceptions for a specific type of document, revisit the question you are asking that instigates this requirement. Consider what assumptions are being incorrectly made about the characteristic's effects. Update your list to appropriately account for this.

Let's say you thought, "If a vendor stores, transmits, or accesses customer data, then I should get their SOC report." You would quickly find that not every vendor who can access your customers' data is going to have a SOC report, and that a SOC report may be quite unnecessary for the service you are receiving. In this case, you could create two separate questions: one about storing customer data, which would require a SOC report, and another about accessing or transmitting customer data, which would require a confidentiality agreement but not a SOC report. Making this adjustment would greatly reduce the number of documented exceptions.

In Summary

While both methods provide standardized ways to gather due diligence documentation from vendors, the bucket method can actually cause more problems for your vendor managers. By using the if-then method, you can manage your vendors based on the services being provided to you and easily change your program to meet the developing needs of your environment. Couple this method with the Tandem Vendor Management Software, and increase the efficiency with which you conduct your program.

As part of a recent data center move, we had to reconfigure several APC network management cards (NMCs). The first thing I did to each of these NMCs was reset it to factory defaults and update the firmware.

This is normally a fairly simple process: connect the appropriate serial cable, connect to the COM port, press the reset button a couple of times, and log in with the default credentials (http://www.apc.com/us/en/faqs/FA156075/). Once logged in, you can run a factory reset or format command to bring the card back to factory settings.

In one case, however, the card didn't survive the factory reset. It appeared the card had started to boot but never finished the boot process. By adjusting the baud rate in my terminal settings, I was able to connect to the BootMonitor at 57600 baud, 8 data bits, no parity, no flow control. At that point I saw an error related to the checksum of the AOS firmware, which was preventing the card from booting. We had another NMC I could swap in if I needed to, but I wanted to see if I could simply reload the firmware onto the card and get everything working again. Fortunately, APC has an article for that: http://www.apc.com/us/en/faqs/FA293874/

Using TeraTerm and XMODEM, I was able to upload the bootmon, AOS, and application module firmware files (in that order) to the NMC. Once that had finally finished, simply rebooting the NMC brought everything back online.

This also had the positive side effect of updating the firmware on the card because I wasn't able to download old firmware files from APC's support site. From there, I could complete my setup process and bring everything back online.

We recently moved a customer from a datacenter at one of their locations to a large datacenter in the Dallas/Ft. Worth area. One of the devices we moved was a Meraki MX84 being used as a VPN concentrator. A VPN concentrator works by extending the network it is on to the access points; basically, wireless clients at all locations get an IP address on the same layer two network. This is important for a few reasons. First, the VPN concentrator needs to be in its own VLAN/DMZ. Second, something on the layer two network the VPN concentrator is connected to needs to hand out DHCP addresses. In our case, we used a FortiGate UTM as the DHCP server for that subnet. Third, traffic needs to be allowed outbound to the Internet from all clients on the VPN concentrator's layer two network so clients can reach the Internet. The traffic is tunneled from the access points to the VPN concentrator, so it does not intermix with the normal network traffic.

One of the issues we had was that the access points would not create the tunnel back to the VPN concentrator. After talking to Meraki support, we found that the issue was that the access points and the VPN concentrator would not connect to each other if their public IP address was the same. This does not work because Meraki uses the same technology to build the VPN from the MX to the access points as they use to build a VPN mesh between MX devices. Our devices were both using the default overloaded outbound NAT rule, so they were coming from the same public IP address. The solution is to make the MX come from a different public IP address, which can be accomplished via an inbound and outbound NAT statement. After we made this change, the access points connected to the VPN tunnel and wireless began to work.

One other thing to note: when configured to tunnel traffic through a VPN concentrator, the access points will not broadcast their SSIDs if the VPN to the concentrator is not up. This can be helpful when troubleshooting wireless when there are no clients at the location of the access points.