Month: June 2014

Dynamic Key Exchange Models

I’ve had a number of people ask me recently about how to implement Dynamic Key Exchange models. Specifically, I’m talking here about ISO8583-based financial payment gateways. This post pertains to situations where you’re acting either as the card issuer (in which case you’re receiving payment transaction requests from the gateway) or as the transaction acquirer (in which case you’re sending payment transaction requests to the gateway so that it can route them for authorization decisioning).

There’s some terminology to square away first:

Local Master Key (‘LMK’) – This is the key held in the HSM that’s used to encrypt the current Working Keys (and Base Derivation Keys, if you’re using DUKPT) so that they can be stored safely in software. Also called the Master File Key (‘MFK’).

Zone PIN Key (‘ZPK’) – The ZPK is what’s used to encrypt the PIN blocks that traverse the wires between institutions. Also referred to as the ‘Working Key.’ This is the key that the Dynamic Key Exchange is acting upon. You’re obligated to change the Working Key at agreed-upon intervals (I typically advocate every 12 hours).

Zone Master Key (‘ZMK’) – Think of the ZMK as the key transportation vehicle. It’s the key that the two parties use to encrypt and exchange new ZPKs. This key is established via a key ceremony. You keep a copy of the ZMK encrypted under the LMK in a file somewhere (you’ll see how it’s used here further down this post). Also called the Key Exchange Key (‘KEK’).

From the moment you start planning discussions with the gateway, establish RIGHT AWAY that you want field-by-field level specifics of how the Dynamic Key Exchange is to be performed. It’ll be within the context of some Network Message Exchange (e.g., 0800/0810), but that’s not granular enough – you need to know the thing down to the field-content level.

Scour the documentation you’ve been provided to see if those details are in there. I’ve done two different gateway projects recently, and in both cases the Key Exchange details were notably absent from the doc. But, that doc exists somewhere within the gateway institution. Track it down. Get your hands on it.

Knowledge of the Key Exchange model is – by design – not widespread among the gateway provider’s project personnel. Insist on getting their expert in on at least one of the planning calls. Make note of this person’s name and contact details. Establish that information channel. This is critically important to your success.

At a high level, there are two models:

You request a new ZPK from the gateway, and they provide it in the response. [I call this the ‘Pull‘ model (for obvious reasons – you pull the key from them).]

The gateway sends you a new ZPK and you respond with a message indicating success or failure. [This, by contrast, is the ‘Push‘ model.]

Your implementation will be one of those.

Now, I’ll provide two examples: one push, one pull. The push model comes first.

The sequence of events is:

The gateway sends us a new ZPK (under ZMK) in an 0800 (MTI) Network Request.

We obtain the ZMK (under LMK) from our files.

We use the cryptograms from Steps 1 and 2 to create the appropriate command to the Key-Up (here, a ’12’).

We get the response from the Key-Up (the ’13’) and validate that the Check Digits match those provided by the Issuer.

Assuming the check in Step 4 is okay, we store the result (the ZPK under LMK) as the new Working Key.

We send an 0810 (MTI) Network Response back to the Issuer (Note that Field 39 on our response is ’00’ – indicating success).
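The six steps above can be sketched in code. The following is hypothetical Python with the HSM and network plumbing stubbed out: the ’12’/’13’ command pair is reduced to one fake function, nothing here is real cryptography, and the ’30’ error code in Field 39 is illustrative only.

```python
# Hypothetical sketch of the push-model key exchange.
# The HSM call is a stub -- a real implementation drives the hardware
# with the '12'/'13' command pair; no actual cryptography happens below.

def hsm_import_zpk(zpk_under_zmk, zmk_under_lmk):
    """Stub for the '12' command: returns (zpk_under_lmk, check_digits)."""
    zpk_under_lmk = "LMK:" + zpk_under_zmk   # fake translation, illustration only
    check_digits = zpk_under_zmk[:6]         # fake KCV, illustration only
    return zpk_under_lmk, check_digits

def handle_key_change_0800(msg, key_store):
    """Process an 0800 carrying a new ZPK (under ZMK); build the 0810."""
    zmk_under_lmk = key_store["ZMK"]                  # Step 2: ZMK from our files
    zpk_under_lmk, kcv = hsm_import_zpk(              # Steps 3-4: HSM command
        msg["zpk_under_zmk"], zmk_under_lmk)
    if kcv != msg["check_digits"]:                    # Step 4: check digits differ
        return {"mti": "0810", "field39": "30"}       # failure ('30' illustrative)
    key_store["ZPK"] = zpk_under_lmk                  # Step 5: new Working Key
    return {"mti": "0810", "field39": "00"}           # Step 6: '00' = success
```

The point is the shape of the flow: look up the ZMK, import the new ZPK, verify the check digits, and only then promote the result to Working Key.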

There’s so much detail here worthy of comment. I’ll touch on a few things (these are the types of detail you want to bring to the surface in your reviews):

This gateway uses ‘162’ in Field 70 to tip us off that a Key Exchange is in play.

Note how we have to pluck the incoming cryptogram out of the esoteric morass of Field 123.

We have to construct an equally cryptic Field 123 on our response.
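The Field 70 routing can be sketched as a tiny dispatcher. The ‘162’ value is this particular gateway’s convention, per the notes above; the ‘001’ logon code is a common ISO8583 value but is illustrative here:

```python
# Illustrative dispatch of incoming network-management (0800) messages.
# '162' signals a key exchange at this gateway; codes vary by network.

def dispatch_0800(fields):
    nmic = fields.get(70)          # Field 70: Network Management Information Code
    if nmic == "162":
        return "key_exchange"      # route to the key-change handler
    if nmic == "001":              # common logon code (illustrative)
        return "logon"
    return "unknown"
```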

Here is the pull model:

We request a new key from the Gateway in an 0800.

The new key (ZPK under ZMK) comes back in an 0810.

We fire off an ‘FA’ to the Thales 8000.

We get the ‘FB’ back and validate the check digits.

If okay, we store the result (the ZPK under LMK) as the new Working Key.

Now, since we’re the initiator here, we need a way to determine when to trigger the exchange request. We do that through a channel Logon Manager.
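A minimal sketch of that trigger logic, assuming the 12-hour change interval advocated earlier (the class name and structure are hypothetical):

```python
import time

KEY_CHANGE_INTERVAL = 12 * 60 * 60   # 12 hours, in seconds

class LogonManager:
    """Tracks when the last key exchange happened and decides when to
    initiate the next 'pull' request. Hypothetical sketch only."""

    def __init__(self, now=time.time):
        self._now = now              # injectable clock, for testability
        self._last_exchange = None

    def key_exchange_due(self):
        if self._last_exchange is None:
            return True              # no Working Key established yet
        return self._now() - self._last_exchange >= KEY_CHANGE_INTERVAL

    def record_exchange(self):
        self._last_exchange = self._now()
```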

You get the idea, I hope! Nail down all of those details up front to maximize your chances of success; otherwise, you’ll spend the project beating your head against a wall.

On PIN-enabled Debit/EBT transactions sent in from an acquirer’s point-of-sale location, your payment switch application must perform a PIN translation. Typically this means transforming the incoming DUKPT PIN block from the POS device-initiated request into an outgoing Triple DES-encrypted PIN block under a Zone PIN Key (“ZPK”) previously established via a dynamic key exchange with your Debit/EBT gateway provider.

[The remainder of this example assumes you’re using a Thales (formerly Racal) Hardware Security Module (“HSM”)….] Using strict Thales parlance, this variant of a PIN translation request is a request to “translate a PIN from BDK encryption to interchange key encryption.” This topic is covered in Section 27.2 (page 2-185) of the Thales reference document entitled “Host Security Module RG7000 Programmer’s Manual” (Thales reference number 1270A514 Issue 5). The CI/CJ exchange should be handled as follows:

— CI —

Message header – Use as you see fit; the value is echoed back in the CJ. Note that the length is constant and must be configured in the HSM by the administrator.

Command code – ‘CI’

BDK – The Base Derivation Key “in play” for this transaction. In my installations we’ve set this up as follows…

Selected the “1A + 32H” option, where the ‘1A’ value should be set to ‘U’

Configured such that the first six positions of the KSN represent the “key name” of the BDK injected into the PIN Pad at the transaction origination point (an acquirer can use a number of BDKs in their terminal population).

ZPK – Your current ZPK cryptogram (obtained dynamically via a key exchange with your Debit/EBT gateway partner) and stored under your Local Master Key (“LMK”). In my installations, we’ve used the “1A + 32H” option, where the ‘1A’ value should be set to ‘U’.

KSN Descriptor – This value is a bit esoteric and refers directly to the make-up of the KSN that follows. To understand the descriptor, it’s first necessary to talk a bit about the KSN (the next field in the CI command layout). Here’s a typical KSN implementation where the acquirer has chosen a 16-position scheme:

Positions 1 – 6: The name of the BDK injected into this device

Positions 7 – 11: The device ID

Positions 12 – 16: The transaction counter

[Note that the KSN implementation has to be in sync between the PIN pad and your host-side implementations in order for this to work.]

The ‘rules’ for KSN construction are as follows (reading from left to right in the KSN):

a. The ‘base derivation key identifier,’ which is mandatory and five to nine (hex) positions in length.

b. A ‘sub-key identifier,’ which Thales says is ‘optional’ but in practice is ‘reserved for future use’ (and therefore always set to zero).

c. A ‘device identifier’ (mandatory), which is two to five hex digits in length.

d. A ‘transaction counter’ (mandatory), which essentially is the part “left over.”

So, in the example here, the client has a 6, 0, 5, 5 implementation. With this information in hand, the KSN Descriptor (a three-position value) is better described as XYZ, where:

X = base derivation key identifier length
Y = sub-key identifier length (will be zero)
Z = device identifier length

So, in this context, the ‘605’ submitted in my example is better visualized: ‘605’ says that the 16-digit KSN consists of a 6-position BDK ID, a 0-position sub-key, a 5-position device ID, **AND** (what’s remaining, basically) a 5-position transaction counter. [NOTE: Remember that this post applies *specifically* to the Thales/Racal implementation of PIN translation.]

Now, with this information in hand, we can introduce the next field, the KSN itself…

KSN – Using the layout from the descriptor, a typical KSN at this acquirer might be 123456000A8001D4 where: ‘123456’ is the BDK identifier; ‘000A8’ is the device ID; and ‘001D4’ is the transaction counter. The BDK name embedded in a particular KSN string must find a match within your BDK cryptogram list (which you need to keep loaded into your payment switch’s encryption database). If a match is not found in the encryption database, set your internal result code to “Invalid BDK” and end the transaction. If a match is found, the value you retrieve goes into the BDK field (as described above).
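To make the descriptor concrete, here is a small Python function that slices a KSN using the XYZ rules just described:

```python
def parse_ksn(ksn, descriptor):
    """Split a KSN per a Thales-style 3-digit descriptor 'XYZ':
    X = BDK identifier length, Y = sub-key length (normally 0),
    Z = device identifier length; the transaction counter is whatever
    remains to the right."""
    x, y, z = (int(c) for c in descriptor)
    i = 0
    bdk_id = ksn[i:i + x]; i += x
    sub_key = ksn[i:i + y]; i += y
    device_id = ksn[i:i + z]; i += z
    counter = ksn[i:]
    return {"bdk_id": bdk_id, "sub_key": sub_key,
            "device_id": device_id, "counter": counter}
```

Running it on the sample KSN with descriptor ‘605’ yields the BDK ID ‘123456’, device ID ‘000A8’, and counter ‘001D4’ described above.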
Source encrypted PIN block – The PIN block plucked from the POS device request (this is a 16H value; no ‘1A’ indicator is required).

Destination PIN block format – In my installations, we typically use ANSI format, so we set this to ‘01’ to signify the ANSI format code.

Account Number – The right-most 12 positions of the PAN, excluding the check digit.

Typically, that is the END of the required CI request message (the remaining fields in the Thales spec are not mandatory).

— CJ —

Message header – Echoed back from the CI.

Response code – ‘CJ’

Error code – Only ‘00’ should be accepted as an exchange that “worked.”

PIN length – Although this field is not used to build the 0200 message formatted for your Debit/EBT gateway, a value like ‘04’ or ‘05’ here is a pretty good indication that the translation occurred successfully.

Encrypted PIN – The PIN block that will be used to build the 0200 message formatted for your Debit/EBT gateway (this is a 16H value; no ‘1A’ indicator is required).

Destination PIN block format – Echoed back as the ‘01’ format code.

Typically, that is the END of the response message (the remaining fields in the vendor spec would only be present if they were provided in the CI command request).
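As a hedged illustration of assembling the CI request in the field order described above (a sketch only; the function name, exact field widths, header handling, and optional fields are assumptions and must come from the Thales manual):

```python
def account_number_field(pan):
    """Right-most 12 digits of the PAN, excluding the trailing check digit."""
    return pan[:-1][-12:]

def build_ci(header, bdk_u, zpk_u, ksn_descriptor, ksn, pin_block, pan,
             dest_format="01"):
    """Assemble a CI request in the field order described above.
    bdk_u / zpk_u are 'U' + 32-hex cryptograms (the '1A + 32H' option)."""
    return "".join([header, "CI", bdk_u, zpk_u, ksn_descriptor, ksn,
                    pin_block, dest_format, account_number_field(pan)])
```

The Account Number field is a one-liner, but it’s easy to get wrong: drop the check digit first, then take the right-most 12 of what remains.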

Back in the 1980s, we had a simple, bifurcated world: there were credit cards (PIN-less and tied to a credit line); and there were ATM cards (always requiring a PIN and tied to a bank account). Muddying the waters was the advent of the so-called ‘check card,’ which can be thought of as a ‘dual mode’ card – it can be used without a PIN as sort of a ‘secured’ credit card (‘secured’ in the sense that the cardholder is dipping into real money in a bank account) or with a PIN as a debit card.

Now, we get into some rather misleading definitions that this muddying has caused…

When you use that check card with a PIN, it’s called Online Debit. For those of you familiar with ISO8583, that PIN-ed request is going to result in you (as the acquirer) formatting an 0200 (the typical MTI used for a Purchase/Sale) request to the Debit/EBT gateway.

The card issuer (or its authorizer) authorizes that request and treats it as the ‘letter of record’ (my term) to debit the account in its nightly posting cycle.

The Debit/EBT gateway may or may not require the inclusion of that Debit transaction in a nightly extract/settlement file (prepped and sent by the acquirer).

When you use the same card without a PIN, it’s called Signature Debit, because you sign for the transaction just as you would with a credit card. Of course, new regulations muddy the waters further: in some merchant categories, a signature is no longer required for purchases under AUD $25, as frequenters of Starbucks know.

Now, the ultimate in misleading definitions: Signature Debit is often called Offline Debit, despite the fact that, 99 times out of 100, the acquirer sends an online transaction request to get an approval decision (you’re not obligated to authorize these, but skipping authorization opens you up to chargebacks; for ISO8583-savvy folks, you send an 0100, the auth MTI, in these situations). Where the ‘offline’ designation comes from is that this online auth is not the letter of record. In these situations, you (as the acquirer) are obligated, assuming the transaction isn’t subsequently reversed, to put these ‘offline debit’ transactions into the settlement/extract file. And it is these items that the Issuer uses to debit the related bank account. In other words, the ‘offline’ appellation refers to the manner in which the bank account ultimately gets debited, not to whether you sent an online request at the time of purchase.

Okay, to further complicate matters: this Offline Debit transaction is often referred to as a Credit. Ugh. Why? Well, you auth it via an 0100, like credit. And, when you stick the related entry into the nightly extract file, you format it as a Credit record. For example, in the FDR North extract file, these transactions get formatted as the Credit ‘D’ record, not the Debit/EBT ‘Q’ record. Indeed, from the perspective of a host-based payment system, you can’t tell the difference between a purchase conducted with a ‘true’ credit card and one conducted with a check card in PIN-less mode. In the words of Dan Rather, “If it looks like a duck, walks like a duck, and quacks like a duck, it must be a duck.”

Australian Standard 2805 (AS2805) is the standard for Electronic Funds Transfer (EFT) and payments in Australia and New Zealand. AS2805 is also used for some implementations in South Africa and Southeast Asia.

AS2805 is owned by Standards Australia and was developed by various voluntary working groups within Committee IT/5. The implementation of AS2805 standards across all industries is clearly defined by the Australian Payments Clearing Association (APCA) as part of the Consumer Electronic Clearing System (CECS) and detailed in the CECS Manual.

Contrary to popular belief, AS2805 is not a renumbering of the ISO8583 standard into the Australian Standards catalogue, as is the case with many international standards.

ISO8583 was first published in 1987, while AS2805 was published two years earlier in 1985, after a lengthy period of draft and review in Australia, New Zealand and South Africa. ISO8583 consists of three (3) parts:

All three (3) parts of ISO8583 concentrate solely on message formats between devices (EFTPOS and ATM) and an acquiring host. ISO8583 can be seen as a small subset of the AS2805 standard, and there is no clear guide for uniform implementation as there is with CECS. AS2805, on the other hand, consists of at least thirty-three (33) separately published parts and covers general EFT topics such as:

Card Management & Authorisation

Card Detail Updating

PIN Management

Key Management and Security

Message Authentication

Privacy and Data Encryption

Communications

Message Structure between Devices and Acquiring Host

Message Structure between Hosts

File Transfers

The thirty three (33) AS2805 standards published so far are the following:

Introduction

Generally, when entrepreneurs decide to become ATM deployers, they do not have much knowledge of ATM protocols and specifications. That knowledge isn’t strictly needed, as there are switching providers that can switch their ATMs’ transactions and provide them with adequate reporting.

Following this approach generally leaves them with a massive gap in terms of managing their terminals and merchants correctly. ATM switching providers have the ability to decode the terminal status messages in real time and determine, for example, if a terminal has run out of cash, if hardware is failing, or even if a processing bank has gone offline.

This allows switching providers to have the upper hand over the smaller ATM deployers.

In this post I will show you how to develop a middleware where this proactive approach can be followed even by small ATM deployers. You will be able to see live cash levels of your terminals and monitor ATM transactions.

ATM Languages

ATMs and EFTPOS devices speak different languages, and each terminal manufacturer might have its own custom implementation. Terminal developers normally follow a few types of language implementations: Triton, NDC+, AS2805, or ISO8583. As all of these run on TCP using some sort of control protocol (VISA II / ACK-controlled), we need to decode the control elements of the protocol as well.

I will be using the standard Triton protocol over VISA II to demonstrate how to trace transactional protocols.
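Before looking at Triton messages themselves, it helps to handle the control framing. The sketch below assumes the common STX/ETX-plus-LRC convention, where the LRC is the XOR of every byte after the STX up to and including the ETX; check your device’s spec, as conventions vary:

```python
STX, ETX = b"\x02", b"\x03"

def lrc(data):
    """Longitudinal redundancy check: XOR of all bytes."""
    out = 0
    for b in data:
        out ^= b
    return bytes([out])

def frame(payload):
    """Wrap a payload as STX + payload + ETX + LRC (LRC covers payload+ETX)."""
    body = payload + ETX
    return STX + body + lrc(body)

def unframe(raw):
    """Validate framing and LRC; return the payload or raise ValueError."""
    if raw[:1] != STX or raw[-2:-1] != ETX:
        raise ValueError("bad framing")
    if lrc(raw[1:-1]) != raw[-1:]:
        raise ValueError("bad LRC")
    return raw[1:-2]
```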

Most ATMs have a TCP/IP setting that will enable you to point the ATM at an IP address and port. Writing a server component to listen for this is, of course, as easy as pie.

Conversion to ASCII

It is easy enough to write a server application and listen on a port for incoming transactions, but we will quickly notice that these messages are encoded. This encoding is an easy hurdle (as long as there is no SSL component on the ATM).

StringToAscii Class – The readable encoding

The first class I’m going to create is one that can convert encoded strings to readable ASCII; this needs to be based on the control characters specified in your protocol specification.

Each hex-encoded string needs to be mapped to the format you require (<ETX> / <STX>), and a simple function can provide the conversion.

To reverse this back into an encoded string, we just pass it to the ASCIIToString function, of course.
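A minimal version of that conversion pair might look like the following (only a few control characters are mapped here; extend the table from your protocol specification):

```python
# Map raw control bytes to readable tokens and back.
CONTROL_CHARS = {0x02: "<STX>", 0x03: "<ETX>", 0x06: "<ACK>", 0x15: "<NAK>"}
TOKEN_TO_BYTE = {v: k for k, v in CONTROL_CHARS.items()}

def string_to_ascii(data):
    """Render raw bytes as readable ASCII with control-char tokens."""
    return "".join(CONTROL_CHARS.get(b, chr(b)) for b in data)

def ascii_to_string(text):
    """Reverse the conversion back to raw bytes."""
    out = bytearray()
    i = 0
    while i < len(text):
        if text[i] == "<":                       # start of a control token
            end = text.index(">", i) + 1
            out.append(TOKEN_TO_BYTE[text[i:end]])
            i = end
        else:
            out.append(ord(text[i]))
            i += 1
    return bytes(out)
```

Note that this simple version assumes a literal ‘<’ never appears in the payload itself.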

Tracing Data

Now that we have all the fields from the ATM’s request, the next step is to create a server component that can run and pass this transactional information on to our processing party.

Creating a server is not covered in this post, but all source code is available in the GitHub repository. The basic structure is that you create an incoming and a sending socket and poll for connections using a daemon thread, passing ALL information to the processing party while saving the requests to the database.

When a specific transaction type is received from the ATM, you can dissect the readable ASCII and save the request to the database; when the corresponding session gets a response, you can then save the response information alongside the request.
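The request/response pairing can be kept simple: key each pending request by its session identifier and attach the response when it arrives. Here is a sketch with persistence stubbed to in-memory structures (swap in your MySQL layer):

```python
class TransactionStore:
    """Pairs ATM requests with switch responses by session id.
    Persistence is stubbed with in-memory structures; a real
    implementation writes to the database instead."""

    def __init__(self):
        self._pending = {}      # session id -> request awaiting a response
        self.completed = []     # paired request/response records

    def save_request(self, session_id, request):
        self._pending[session_id] = request

    def save_response(self, session_id, response):
        request = self._pending.pop(session_id, None)
        if request is None:
            return None         # orphan response; worth logging
        record = {"session": session_id, "request": request,
                  "response": response}
        self.completed.append(record)
        return record
```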

In my code example I provide an implementation of request and response saving based on the Triton specification. If you are using NDC+ or ISO8583, this will change dramatically.

Security

The security aspect of this project should be implemented in the raw socket components (SSL sockets); if there is a need for SSL certificate validation and ACLs, it should not be difficult to add this to the required class.

PCI DSS requires that we not store raw Primary Account Numbers (PANs) in the database, so we should mask them, keeping at most the first six and last four digits in the clear.

A simple method for doing this is the following:

def mask_pan(pan):  # keep the first 6 (BIN) and last 4 digits; mask the rest
    return pan[:6] + ("*" * (len(pan) - 10)) + pan[-4:]

Final Product

The final product is a Python implementation of transactional middleware. A lot of changes will be required to make the project work for your environment, but here are basic instructions to get it working for you.

Get a clean Linux installation

Install the LAMP Stack

Run the SQL file in the project

Copy the project directory to the server

Install all the project dependencies (pymysql, etc.)

Change the config file (Middleware.ini) to point to your database and your switching provider