Friday, December 9, 2016

I have gotten into multiple discussions on the topic of TLS 1.0. The result always seems to be no change in anyone's position.

There are a few agreed-upon points:

SSL is fully forbidden.

TLS 1.2 is best

TLS 1.0 and 1.1 are not as good as 1.2

Bad crypto algorithms must not be used (e.g. NULL, DES, MD5, etc)

However, some people are making a policy decision that TLS 1.2 is the ONLY protocol. They are allowed to make this policy choice, as long as it doesn't impact others that can't support that policy.

I have no problem with a war on SSL. I simply have a realist view of the available implementations of TLS 1.2 on the platforms where the software must run. I would love for everyone to have the latest protocols, and for those protocols to be perfectly implemented. Reality sucks!

Standards Recommendation on TLS

What is especially frustrating is that they point at standards as their justification. YET those standards explicitly allow use of TLS 1.1 and TLS 1.0 in a very specific and important practical case... that is, when the higher protocol is not available.

It is this last clause that seems to be escaping recognition.

The 'standard' being pointed at is RFC 7525 from the IETF (the writers of the TLS protocol). This isn't just an IETF specification, it is a "Best Current Practice" -- aka BCP 195 -- published May 2015.

3.1.1 SSL/TLS Protocol Versions

It is important both to stop using old, less secure versions of SSL/
TLS and to start using modern, more secure versions; therefore, the
following are the recommendations concerning TLS/SSL protocol
versions:
o Implementations MUST NOT negotiate SSL version 2.
Rationale: Today, SSLv2 is considered insecure [RFC6176].
o Implementations MUST NOT negotiate SSL version 3.
Rationale: SSLv3 [RFC6101] was an improvement over SSLv2 and
plugged some significant security holes but did not support strong
cipher suites. SSLv3 does not support TLS extensions, some of
which (e.g., renegotiation_info [RFC5746]) are security-critical.
In addition, with the emergence of the POODLE attack [POODLE],
SSLv3 is now widely recognized as fundamentally insecure. See
[DEP-SSLv3] for further details.

o Implementations SHOULD NOT negotiate TLS version 1.0 [RFC2246];
the only exception is when no higher version is available in the
negotiation.
Rationale: TLS 1.0 (published in 1999) does not support many
modern, strong cipher suites. In addition, TLS 1.0 lacks a per-
record Initialization Vector (IV) for CBC-based cipher suites and
does not warn against common padding errors.
o Implementations SHOULD NOT negotiate TLS version 1.1 [RFC4346];
the only exception is when no higher version is available in the
negotiation.
Rationale: TLS 1.1 (published in 2006) is a security improvement
over TLS 1.0 but still does not support certain stronger cipher
suites.
o Implementations MUST support TLS 1.2 [RFC5246] and MUST prefer to
negotiate TLS version 1.2 over earlier versions of TLS.
Rationale: Several stronger cipher suites are available only with
TLS 1.2 (published in 2008). In fact, the cipher suites
recommended by this document (Section 4.2 below) are only
available in TLS 1.2.
This BCP applies to TLS 1.2 and also to earlier versions. It is not
safe for readers to assume that the recommendations in this BCP apply
to any future version of TLS.

Note that the last bullet tells you that you yourself must support TLS 1.2. A good thing, if your platform allows it.
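To make the distinction concrete, here is a minimal sketch using Python's standard ssl module: one context that follows the BCP (prefer 1.2, fall back to 1.0/1.1 only when nothing higher is available) and one that implements the strict "TLS 1.2 only" policy. The variable names are mine; whether the permissive floor actually handshakes depends on how your OpenSSL build was compiled.

```python
import ssl

# BCP 195 posture: MUST support and prefer TLS 1.2, but SHOULD NOT use
# 1.0/1.1 *except* when no higher version is available in the negotiation.
permissive = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
permissive.minimum_version = ssl.TLSVersion.TLSv1    # fallback floor
permissive.maximum_version = ssl.TLSVersion.TLSv1_2  # preferred ceiling

# The strict "TLS 1.2 is the ONLY protocol" policy some are mandating.
strict = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
strict.minimum_version = ssl.TLSVersion.TLSv1_2
```

The negotiation itself then picks the highest version both ends share, which is exactly the behavior the BCP's "only exception" clause is describing.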

Conclusion

Yes, it would be great if everyone had all the latest protocols, and if all those protocols were implemented without errors... BUT reality gets in our way. Especially so when the goal is Interoperability, which is exactly what we are trying to achieve.

UPDATE: Readers should note that RFC 7525 is very readable and full of far more recommendations than just TLS version, including detailed discussion of cipher suites, authentication types, etc. There is no perfect solution or configuration. Security is RISK MANAGEMENT, and needs continuous care and monitoring by an expert.

Tuesday, December 6, 2016

I 'figured' that I could continue to use my existing United Healthcare web login account to access both the old and new insurance accounts. Turns out this is not the way they do it. They want me to create a new web login for the new insurance account. I guess this is logical, and clean for them. It is inconvenient for me to have two accounts at the same web site, but it is at least possible.

Chat session with UnitedHealthcare

Paula B. has entered the session.
JOHN MOEHRKE: Hi Paula.
Paula B.: Thank you for being a loyal member with UnitedHealthcare. How can I help you today?
Paula B.: How are you?
JOHN MOEHRKE: I have changed employer
JOHN MOEHRKE: My new Employer also uses UHC
JOHN MOEHRKE: so, how do I get my web login to recognize this new account?
Paula B.: I understand. For the website to recognize your new account through your new employer, you will need to re-register with your new account information.
JOHN MOEHRKE: so... I need to create a new login user? Or is there some process I can use to use the current login?
Paula B.: No, I'm sorry, you cannot use the old information. If I am not mistaken, it will continue to be associated with the old account.
JOHN MOEHRKE: okay. so how do I close the old login account? Meaning, how do I prevent it from ever being used again?
Paula B.: Once you create your new and everything has been update throughout all the databases that old account will no longer be active.

What I was worried about is that after I stop using my old login, there is a risk that the account is not monitored and thus open to attack. The attack would need to avoid the normal detection on accounts. But as we have seen this week with Credit-Cards, a smart attacker figures out ways to avoid detection. In the case of Credit-Cards, they used many storefronts to try various codes. In the case of a user login, they might simply try a small number (1-3) of attempts each day, presuming the detection counter resets each day. Given that I would not be logging in even occasionally, as I have abandoned the account, the attacker has years and years to try.
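A back-of-envelope calculation shows why this matters; the 3-attempts-per-day threshold and the 5-year window are my assumptions, not anything United Healthcare publishes.

```python
# Hypothetical low-and-slow attack on an abandoned account: stay just
# under a daily lockout threshold that (we assume) resets each day.
attempts_per_day = 3       # assumed detection threshold
years_abandoned = 5        # account sits unwatched by its owner
total_guesses = attempts_per_day * 365 * years_abandoned
# 5475 guesses, without ever tripping a per-day detection counter,
# and with no account owner around to notice the lockout warnings.
```

Against a weak password, thousands of undetected guesses is plenty, which is why a hard disable date on abandoned accounts is a meaningful control.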

The good news is that United Healthcare has a policy that covers this. They know that the account is abandoned. Their login shows me this. They allow me to log in for 18 months, so that I can get to old information. Often this old information might be needed for TAX purposes, so 18 months is reasonable. After 18 months they totally disable the account. I tried to get details on just what this means, but the responses I did get up to this point give me some comfort that they did this right.

Paula B.: Once you create your new and everything has been update throughout all the databases that old account will no longer be active.
JOHN MOEHRKE: when you say... no longer be active... does that mean that it would be impossible to log-in to it? Sorry to be specific, I am a Privacy/Security expert, and don't like abandoned accounts that have healthcare information within them. If I stop using it, I can't tell if an attacker is trying to break in.
Paula B.: I understand.
Paula B.: You will have access to it for up to 18 months. After that point, the information will not longer accessible on myuhc.com.
JOHN MOEHRKE: okay, so that is a specific policy? I like that answer. It gives the user (me in this case) a chance to get old information I might need... while having a specific deadline. Thanks.
JOHN MOEHRKE: can you point to where that policy statement is written? (I trust, but... as I said, I am a Privacy/Security expert... so I like to verify)
Paula B.: You're welcome! I understand, but I am unable to point to where that is written. That is a UnitedHealthcare standard.
JOHN MOEHRKE: okay. thanks
Paula B.: If there is nothing else, thank you chatting today. I hope you have a great day!

Wish I had a policy fragment to point at... I guess I should set a reminder to try in 18 months...

Saturday, December 3, 2016

This is a use of the IHE published De-Identification Handbook against a use-case. The conclusion we came to is an important lesson: sometimes the use-case needs can't be met with de-identification to a level of 'public access'. That is, the 'needs' of the 'use-case' required so much data to be left in the resulting-dataset that the resulting-dataset could not be considered publicly accessible. This conclusion was not much of a problem in this case, as the resulting-dataset was not anticipated to be publicly accessible.

The de-identification recommended was still useful as it did reduce risk, just not fully. That is, the data was rather close to fully de-identified, just not quite. The reduced risk is still helpful.
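A toy sketch of the situation: direct identifiers come out, but the quasi-identifiers the use-case demands stay in, so the output reduces risk without reaching 'public access'. The field names and values here are illustrative inventions, not from the IHE Handbook or the actual project.

```python
# Hypothetical record; field names are illustrative only.
record = {
    "name": "Jane Doe",    # direct identifier - removed
    "mrn": "12345",        # direct identifier - removed
    "zip": "53188",        # quasi-identifier the use-case needs - kept
    "birth_year": 1952,    # quasi-identifier the use-case needs - kept
    "diagnosis": "I10",    # the payload the use-case needs - kept
}

DIRECT_IDENTIFIERS = {"name", "mrn"}

def de_identify(rec):
    """Drop direct identifiers. Quasi-identifiers remain, so the result
    reduces risk but is NOT safe to treat as publicly accessible."""
    return {k: v for k, v in rec.items() if k not in DIRECT_IDENTIFIERS}

released = de_identify(record)
```

The residual (zip + birth_year + diagnosis) is exactly the kind of combination that re-identification research warns about, which is why the resulting-dataset still needs access controls.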

Alternative use-case segmentation could have been done. That is, we could have created two sets of use-cases that each targeted different elements, while also not enabling linking between the two resulting-datasets. However this was seen as too hard to manage versus the additional risk reduction.

Wednesday, November 30, 2016

My last article discussed whether XUA (SAML) was useful in a Service-to-Service SOAP exchange. The same question came to me regarding FHIR and http REST. It was not as well described, as it came in a phone call. But essentially the situation is very similar: there are two trading partners with an agreement (Trust Framework) that one will be asking questions of the other using FHIR http REST interfaces.

Using Mutual-Authenticated TLS

The initial solution they were thinking of was to simply use Mutually-Authenticated TLS in place of the normal (Server Authenticated) https.

This solves authentication of the server to the client, and authentication of the client to the server.

This solves the encryption and data integrity (authenticity) problem.

Thus keeping EVERYONE else on the internet out of the conversation.

The negative of this is that one must manage Certificates: one issued to the Client, one issued to the Server. The more clients and servers you have, the harder the management of these Certificates becomes. As this number approaches a large number (greater than 2 by some people's math, greater than 20 by others) it becomes more manageable to involve a Certificate Authority. You can use a Certificate Authority from the start, but it is not that critical. Some Operational-Security-Geeks will demand it, but often they are misguided.
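For concreteness, here is what the server side of Mutually-Authenticated TLS looks like with Python's standard ssl module. The file names are placeholders for certificates you (or your Certificate Authority) would have issued; the key line is verify_mode, which is what turns ordinary server-authenticated https into mutual authentication.

```python
import os
import ssl

# Server side of Mutually-Authenticated TLS: the server presents its own
# certificate AND requires one from the client.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.verify_mode = ssl.CERT_REQUIRED   # reject clients that present no certificate

# Placeholder file names; load the key material when actually deployed.
if os.path.exists("server.pem"):
    ctx.load_cert_chain(certfile="server.pem", keyfile="server.key")
    # Only clients whose certificates chain to this trust store are accepted.
    ctx.load_verify_locations(cafile="trusted_clients.pem")
```

With a direct agreement between two partners, "trusted_clients.pem" can simply contain the partner's self-signed certificate; a Certificate Authority only becomes necessary as the partner count grows.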

So at this point we have a simple solution that addresses the issues. It looks very good on paper.

Problem with Client authenticated TLS

There is however a practical problem that might cause you pain. It caused me pain as soon as the project I was on tried to implement this at scale. At scale, one tends to use hardware assistance with the TLS protocol. There are many solutions; in my example it was F5 load-balancing hardware with TLS support. These fantastic devices make TLS --- FAST --- But their default configuration doesn't include Mutual-Authenticated TLS. They do support Mutual-Authenticated TLS; it just has to be configured.

The next problem is that the TLS acceleration box strips off TLS, meaning that my web-server gets no indication of the identity of the client. If I don't have different Access Control rules per client, this might not be a big problem. However I then don't have a way to record in the Audit Log who the client is. If the client is exactly ONE system, then I can guess that it is that system.

The good news is that the TLS acceleration box can likely be configured to pass along that client identity from the TLS client authentication. In my case, there was a chapter in the F5 documentation that told me how to write the script to be inserted in the F5 so that it would extract the Client Identity, and stuff it into an http header of my choosing. Thus my web server could look at that header for the identity of the client. Of course I had to make sure that the header NEVER came from the external world, or it wouldn't be an indication of an authenticated client. This is a kludge, but a defendable one.
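The web-server side of that kludge can be sketched as below. The header name "X-Client-DN" is my invention (you choose your own), and the whole scheme only works if the proxy strips any externally-supplied copy of the header, which is what the trusted-proxy check stands in for.

```python
# Header the TLS-terminating proxy (F5 in my case) is assumed to inject
# with the authenticated client's certificate subject. Hypothetical name.
TRUSTED_IDENTITY_HEADER = "X-Client-DN"

def client_identity(headers, from_trusted_proxy):
    """Return the TLS client identity the proxy extracted, or None.

    The header is only trustworthy when the request came through the
    proxy that strips this header from all external traffic.
    """
    if not from_trusted_proxy:
        # Could be forged if the request bypassed the proxy.
        return None
    return headers.get(TRUSTED_IDENTITY_HEADER)

who = client_identity({"X-Client-DN": "CN=partner-system"}, from_trusted_proxy=True)
# 'who' is now available for Access Control decisions and the Audit Log.
```

The fragility is obvious: every path into the web stack has to go through the stripping proxy, or the "authenticated" identity is just an attacker-supplied header.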

Using OAuth is better?

So, using OAuth for client authentication, while using normal https (server-only authenticated TLS), is far easier to configure on the server. In fact it is supported by the default cloud stacks.

The advantage of this is that the OAuth token gets forwarded directly to your web stack, where it can be used for Access Control and Audit Logging. It can be verified, based on the OAuth protocol (and all the security considerations left open by OAuth). Really nice for both service orchestration and 'scale' is that it can be further forwarded to sub-services, which can validate it and use it for access control and audit logging. This is a really important feature if you have a nice modular service-oriented architecture.

The drawback of OAuth is that you must include an OAuth authority in the real-time conversation. Whereas with Mutual-Authenticated TLS the certificate is issued once and good for a couple of years, the OAuth token is often only good for 24 hours, or 8 hours, or less. You could issue 2-year OAuth tokens, but that is unusual.
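The lifetime difference is easy to illustrate, assuming a JWT-style access token that carries an "exp" (expiry, seconds since epoch) claim. This sketch only checks the clock; a real deployment must also verify the token's signature against the OAuth authority's keys, which is the part that drags the authority into the conversation.

```python
import time

def is_expired(claims, now=None):
    """True when the token's 'exp' claim is in the past.
    Signature verification is deliberately out of scope here."""
    now = time.time() if now is None else now
    return claims.get("exp", 0) <= now

# Typical short-lived access token (~8 hours)...
token_claims = {"exp": time.time() + 8 * 3600}
# ...versus a certificate-style lifetime (~2 years), which OAuth
# deployments rarely use.
cert_like = {"exp": time.time() + 2 * 365 * 86400}
```

Short lifetimes are a feature (revocation by expiry) and a cost (constant round-trips to the token endpoint); that trade is the real difference from certificate-based client authentication.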

I need to note that in order to use OAuth you do need to deal with registering the client 'application', which is often done via a static secret (password-like), or via a certificate... Everything ultimately does come down to a secret or a certificate. I would recommend a certificate, but realize that is not the common solution. Most implement only a secret key.

Both is better?

Not really. The only benefit you get by using both Mutually-Authenticated TLS and OAuth is that the hardware-accelerated TLS box (F5) can reject bad clients with less overhead on your web-stack. This is a benefit, but you need to weigh it against the cost of certificate issuing and management.

It is however better in that it has the fewest hacks to get it to work fully.

Conclusion

As easy as http REST (aka FHIR) is, it is very hard to get security right. Sorry, but the vast majority of http REST carries either completely non-secured content or only simple security, such as a Wiki, Blog, or Social network. None of them are dealing with sensitive content that is multi-level sensitive. It is this that makes healthcare data so hard to secure while respecting Privacy.

Friday, November 25, 2016

I got an email question asking if the use of XUA is proper for situations of service-to-service communication.

I am not sure how far XUA really got in the IHE world, but we have an HIE in XYZ [sic] that seems to want to implement it on every IHE transaction, even those without a document consumer. Our role with them is strictly at a system level as a document provider and of course we are using Mutual Authentication
Reading the XUA spec it seems that IHE was gunning for consent authorization of a document consumer and those transactions, though it never actually came out and said "just" those transactions. SO my questions.
Does the IHE have a stance on this ? Are all transactions(XDS and PIX PDQ) to use SAML ? Or is the spirit of the law about consent and document consumption calls ?

How much is XUA used... very hard to know. But the concept of XUA is simply that a requesting party identifies the requesting agent using SAML, where that agent is usually a human in an interactive workflow. If we recognize that XUA is simply the use of SAML, not any specific subset, then I would say it is nearly universally used. In many cases the server is ignoring it completely, in a few more it is doing nothing but recording it in an audit log, but in a few it is being used in an Access Control decision. All of these are the vision of XUA.

XUA is also not tied to XDS; it is tied to SOAP transactions. These are mostly found in the XDS family (XCA, XDR, XDS), but also exist in some patient lookup transactions (XCPD, PIXv3, and PDQv3). However there is no clear binding between SAML and HL7 v2 transactions like PIX and PDQ. It is not clear how one would identify the user in cases of HL7 v2. Note that you could identify a user in PDQm and PIXm using IUA; but that is a different blog - Internet User Authorization: why and where.

XUA does include a number of optional attributes; specifically, use-cases that when needed shall be satisfied in a specified way, but the use-cases themselves are not mandatory. There are indeed a few consent-focused use-cases in this optional space. If the client needs to inform the server of a specific consent that authorizes access, THEN it is communicated thus. Other more commonly used use-cases are those around the name of the user, and the purpose of use for the request.
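As a sketch of what a server does with one of these attributes, here is a toy parse of the purpose-of-use from a SAML AttributeStatement using Python's standard XML library. The fragment is a trimmed illustration, not a complete or valid XUA assertion, and real code would first validate the assertion's signature.

```python
import xml.etree.ElementTree as ET

# Trimmed, illustrative SAML fragment -- not a complete XUA assertion.
ASSERTION = """<saml:AttributeStatement
    xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">
  <saml:Attribute Name="urn:oasis:names:tc:xspa:1.0:subject:purposeofuse">
    <saml:AttributeValue>TREATMENT</saml:AttributeValue>
  </saml:Attribute>
</saml:AttributeStatement>"""

NS = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}

root = ET.fromstring(ASSERTION)
purpose = None
for attr in root.findall("saml:Attribute", NS):
    # Match on the attribute Name the trading partners agreed to.
    if attr.get("Name", "").endswith("purposeofuse"):
        purpose = attr.find("saml:AttributeValue", NS).text
```

A server can feed a value like this straight into its Access Control decision (e.g., break-glass vs. routine treatment) and into the audit log, which is the XUA vision in miniature.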

XUA is independent of consent, although many times consent is specific down to the user.

XUA is most often useful when the requesting party is a human, but that is not the only useful scenario.

However it is not unusual to use XUA to identify the service that is making a request, even if it is redundant to the TLS 'client' certificate. The SAML assertion is more expressive, and including it allows for future expansion to utilizing this more expressive capability.

From a practical perspective, it is common for TLS to be terminated at the very edge of a cloud infrastructure. It certainly authenticates the calling system. But being terminated in a TLS-specific piece of equipment, that identity is not available for the Access Control checks that will happen later. This kind of configuration simply can't make access control decisions based on the TLS client identity (or can't without some hacks in the service stack).

Conclusion

So I think it is reasonable that you are being asked to include a SAML assertion in all requests, even those that are automated and for which the only identity you can claim is the automated service itself. The analysis that does need to be done is: what triggered the request? The agent that triggered the request is the one that needs to be identified. Today it is likely to be a background task, not a human. Background tasks can be identified in SAML just as well as humans can.

Monday, November 21, 2016

Sorry to my audience for not getting much from my blog lately. The transition back to working life has been distracting me. I am very sick of forms. I realize that I benefit from the forms being online, filled out in a browser from the comfort of my home. I can only imagine that a few years ago all of this training and these forms would have been in-person and on paper.

Some blog topics:

IHE (ITI and possibly others) Plans for next year...

Finish out my Privacy Consent topic with detailed breakdown of the abstract (done) into

IHE-BPPC,

IHE-APPC,

HL7-CDA-Consent,

HL7-FHIR-Consent,

Kantara-Consent-Receipt, and

OAuth and UMA

IHE role in a FHIR world

Adding sensitive data to a Health Information Exchange

Something useful about Blockchain...

Something assertive about OAuth and FHIR

I often write an article based on some random question I got via email... so please ask me random questions. You can try to use my blog's "Ask Me A Question".

Tuesday, November 1, 2016

I start my new job today. No office to go to; home is my office. I now work for a consulting organization, "By Light Professional IT Services, Inc", that has a 4-year contract supporting and enhancing the Health Information Exchange capability of the VA healthcare system (VHA) to the rest of healthcare. I am a Standards Architect, doing the same thing I did for GE: standards creation and use. Working for the government I have had a few dozen forms to fill out, gotten fingerprinted, and then hours of training. I will still be blogging about standards developments, and implementation guidance on implementing those standards. I likely will be covering Privacy and Security less, heading more into transports and content.

About Me

The opinions posted here are mine and do not necessarily represent By Light Professional IT Services Inc. I am a Standards Architect specializing in Interoperability, Security, and Privacy for By Light Professional IT Services Inc, primarily involved in international standards development and the promulgation of those standards. I am a co-chair of the HL7 Security workgroup, a member of the FHIR Management Group, and active in IHE. I have participated in ASTM, DICOM, HL7, IHE, ISO/TC-215, Kantara, W3C, IETF, OASIS-Open, and others. I was a core member of the Direct Project specification writing team, authoring the security section and supporting risk assessment. I am active in many regional initiatives such as the S&I Framework, SMART, HEART, CommonWell, Carequality, Sequoia (NwHIN-Exchange), and WISHIN. I have been active in Healthcare standardization since 1999, during which time I have authored various standards, profiles, and white papers.

Surely there are other copyrights and trademarks that I should recognize, but everyone else seems to be reasonable, expecting readers of blogs to know that I am not trying to claim ownership of their copyrights and trademarks.