In my job I help our customers build URLs for whatever tool they use to send emails to their customers and prospects, so I keep a running list of the tokens each tool uses as the template parameter in the URL for the email address (so they know who is opening the link).

After I upgraded my Mac to El Capitan, I was having some problems installing new packages. I was getting access denied errors when some packages tried to upgrade (and hence remove) existing packages.

For those not using virtualenvs: I had packages installed in the default Python site-packages directory (/Library/Python/2.7/site-packages in my case). This was causing problems because El Capitan introduced a new feature called System Integrity Protection (also called "rootless") that prevents you (even as root via sudo) from modifying files in a number of system directories, which seemed to be what was affecting these installs.

Below are the steps I took to resolve the issue; they're a general outline for how you can resolve it yourself:

1. Capture a list of all the packages you have installed: pip freeze > some-file-to-keep-results

2. Disable System Integrity Protection. This involves rebooting into recovery mode (hold Command+R during boot), launching a terminal, running csrutil disable, and rebooting back into normal mode.

3. Ensure that all the packages in the system site-packages directory (/Library/Python/2.7/site-packages) are gone; remove any remaining packages manually.

4. Re-enable System Integrity Protection using the same procedure as #2, but with the csrutil enable command.

5. Once rebooted into normal mode again, install a version of Python that's not the one that comes with OS X. brew install python will do that if you have the Homebrew package manager installed. This is better for Python development anyway.

6. Finally (and this might not be required in your case), pip still wasn't available via the shell, so I needed to manually create a command to invoke it. I created a script named pip in /usr/local/bin and made it invoke the pip package:
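A minimal sketch of such a wrapper (the filename and interpreter choice here are my assumptions, on the premise that the brewed Python is first on the PATH):

```shell
#!/bin/sh
# Hypothetical wrapper saved as /usr/local/bin/pip:
# hand everything off to the pip module of the first python on PATH.
exec python -m pip "$@"
```

With the wrapper in place, the list captured in step 1 can be reinstalled with pip install -r some-file-to-keep-results.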

With all the challenges of XSS, most often your goal is to prevent unintentional script execution. Ironically, getting dynamically injected scripts to run when you want them to can be as hard as preventing those that you don’t want to.

My problem came up while building plugins for Docalytics documents. Essentially, we are allowing widgets between pages of an HTML5 document viewer. The owner of a document can define HTML and JavaScript to be placed between pages of the document to allow for things like video, surveys, etc.

The entire viewer is written in JavaScript, so these plugins were read by the viewer JS and created dynamically as needed. This meant that if the HTML created by the document owner included script tags, they should be run as needed.

Depending on your scenario, this might not be too hard. jQuery takes care of this for you when you add HTML via its methods, such as $(...).html(...). jQuery actually parses the HTML itself, identifies script blocks, and executes them via eval(...). The problem comes in for scripts with an src attribute rather than an inline body.

jQuery loads script tags with an src attribute via AJAX and then executes them. This is fine if the script is located on your servers, but in my case the scripts were hosted on 3rd-party sites, and those servers weren't set up for cross-domain requests.

My final solution was based on this StackOverflow question. I injected the HTML using the raw DOM APIs, then executed a helper function on that node to go back and execute the scripts. My modified version of the StackOverflow answer is below; it handles the case of src attributes on the scripts.
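In outline, the helper works like this (the function and variable names here are my reconstruction of the approach, not the exact code): scripts added via innerHTML are inert, but a script element created with document.createElement and inserted into the DOM does execute, and external src scripts are then fetched natively by the browser rather than via AJAX.

```javascript
// Replace each inert <script> left by innerHTML with a freshly
// created one, which the browser will actually execute.
function setInnerHtmlWithScripts(container, htmlString) {
  container.innerHTML = htmlString;
  var scripts = container.querySelectorAll('script');
  Array.prototype.forEach.call(scripts, function (oldScript) {
    var newScript = document.createElement('script');
    // Copy every attribute, including src, so external scripts load
    // directly from the third-party host (no cross-domain AJAX needed).
    Array.prototype.forEach.call(oldScript.attributes, function (attr) {
      newScript.setAttribute(attr.name, attr.value);
    });
    newScript.textContent = oldScript.textContent; // inline script body
    oldScript.parentNode.replaceChild(newScript, oldScript);
  });
}
```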

The gist of the talk is that with web frameworks like Rails and Django, data migration is a feature of the data-model tools. With the App Engine Datastore (now Cloud Datastore) you have to do the work yourself. In the talk I give Python examples of how to update NDB models, and how to use deferred tasks and mapper/mapreduce jobs to update existing entities.
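The deferred-task part of that can be sketched independently of App Engine (every name below is a placeholder, not the real ndb/deferred API): each task updates one page of entities, then re-enqueues itself with a cursor until everything has been touched.

```python
# Placeholder sketch of cursor-based batch migration. On App Engine the
# fetch would be an ndb query with a cursor and defer() would be
# google.appengine.ext.deferred.defer; here they are injected so the
# shape of the pattern is visible on its own.
def migrate_in_batches(fetch_page, update_entity, defer,
                       cursor=None, batch_size=100):
    entities, next_cursor = fetch_page(cursor, batch_size)
    for entity in entities:
        update_entity(entity)  # e.g. populate a new model property
    if next_cursor is not None:
        # Hand the rest of the work to a fresh task so no single
        # request runs long enough to hit the deadline.
        defer(migrate_in_batches, fetch_page, update_entity, defer,
              next_cursor, batch_size)
```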

I'm going to preface this post with the fact that I'm not an expert with pf, the tool I'm using here to do this. I've just hacked together something that works from other tutorials I've found online.

By default the App Engine local development server runs on port 8080, which is fine, but our app has some domain regex rules that are hard to test when the URL isn't similar to how it's deployed in production. To make things more realistic, I edited my /etc/hosts file to give me "real" domains for my local dev environment. That solves part of the issue, but the other part is getting things running on the right port. The first 1024 ports on *nix are restricted to privileged processes, so directly running the development app server on port 80 would be a pain; instead I set up port forwarding.
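For reference, an /etc/hosts entry of that sort looks like this (the hostname is a made-up placeholder, not our real domain):

```
127.0.0.1   dev.myapp.example
```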

The above linked tutorials got me going in the right direction, but didn’t quite work for me. Here are my steps.

First, create a new rules file in /etc/pf.anchors:

sudo vim /etc/pf.anchors/local-appengine
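The file's contents are a single loopback redirect rule, something like this (my reconstruction based on the tutorials linked above — adjust 80/8080 to match your setup):

```
rdr pass on lo0 inet proto tcp from any to any port 80 -> 127.0.0.1 port 8080
```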

Next, edit /etc/pf.conf so that it looks like the following; the "forwarding" anchor lines and the final load line are what's been added to the stock file:

#
# Default PF configuration file.
#
# This file contains the main ruleset, which gets automatically loaded
# at startup. PF will not be automatically enabled, however. Instead,
# each component which utilizes PF is responsible for enabling and disabling
# PF via -E and -X as documented in pfctl(8). That will ensure that PF
# is disabled only when the last enable reference is released.
#
# Care must be taken to ensure that the main ruleset does not get flushed,
# as the nested anchors rely on the anchor point defined here. In addition,
# to the anchors loaded by this file, some system services would dynamically
# insert anchors into the main ruleset. These anchors will be added only when
# the system service is used and would be removed on termination of the service.
#
# See pf.conf(5) for syntax.
#
#
# com.apple anchor point
#
scrub-anchor "com.apple/*"
nat-anchor "com.apple/*"
rdr-anchor "com.apple/*"
rdr-anchor "forwarding"
dummynet-anchor "com.apple/*"
anchor "com.apple/*"
anchor "forwarding"
load anchor "com.apple" from "/etc/pf.anchors/com.apple"
load anchor "forwarding" from "/etc/pf.anchors/local-appengine"

I was referred to the feature request form for App Engine as part of a support ticket and hadn't seen a link to it previously. It may be useful to others, though I think it's App Engine-specific, not general to all products on Google Cloud.

If you’re on Google App Engine and you are looking for a way to do some work over a large set of data in the datastore, there’s a good chance you’ll turn to App Engine Mapreduce. Unfortunately the UI for this tool leaves something (much) to be desired.

The control screen looks something like this after you've run a few jobs, especially if you are running pipelines that have a lot of sub-pipelines. All of this is a pain to clean up: you have to click cleanup next to each entry, and it even annoyingly prompts you with a confirmation dialog for each one.

To resolve this issue, you can just delete the data in the datastore directly. Below is a code snippet which you can run through some sort of endpoint to delete the old data:

The function defines expando versions of the models the mapreduce library uses so that you don’t have to worry about crazy imports, and then just goes through and deletes all the entities for each type.
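In outline, that cleanup looks something like this (the kind names and the two injected helpers are my assumptions — on App Engine the helpers would be a keys-only ndb query and ndb.delete_multi, run against expando stand-ins for the library's models):

```python
# Bookkeeping kinds written by the mapreduce/pipeline libraries. These
# names are from memory of the library internals -- verify them against
# your own Datastore viewer before deleting anything.
MAPREDUCE_KINDS = [
    "_AE_MR_MapreduceState",
    "_AE_MR_ShardState",
    "_AE_Pipeline_Record",
    "_AE_Pipeline_Slot",
    "_AE_Pipeline_Barrier",
    "_AE_Pipeline_Status",
]

def delete_mapreduce_entities(fetch_keys, delete_keys, kinds=MAPREDUCE_KINDS):
    """Delete every entity of each bookkeeping kind.

    fetch_keys(kind)  -> list of keys (a keys-only query for that kind)
    delete_keys(keys) -> batch delete of those keys
    Returns the number of entities deleted.
    """
    deleted = 0
    for kind in kinds:
        keys = fetch_keys(kind)
        if keys:
            delete_keys(keys)
            deleted += len(keys)
    return deleted
```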

I just spent the last day fighting this issue, so I thought I’d post the problem and solution for anyone else who is fighting with it.

Docalytics is building an Outlook plugin to track attachments in sales emails using VSTO (Visual Studio Tools for Office), and we are using ClickOnce for the deployment so that we can get automatic updates. Everything was going swimmingly until I tried to test the installation. When running a copy of the installer locally, the publisher was listed as "Unknown Publisher" even though I was signing the ClickOnce manifests with a certificate from a trusted authority (COMODO RSA Code Signing CA). When trying to install it from the web, it also behaved as if the manifests weren't signed, giving me errors like the following:

Customization URI:
Exception: Customized functionality in this application will not work because the certificate used to sign the deployment manifest for Docalytics for Outlook or its location is not trusted. Contact your administrator for further assistance.
************** Exception Text **************
System.Security.SecurityException: Customized functionality in this application will not work because the certificate used to sign the deployment manifest for Docalytics for Outlook or its location is not trusted. Contact your administrator for further assistance.
at Microsoft.VisualStudio.Tools.Applications.Deployment.ClickOnceAddInTrustEvaluator.VerifyTrustPromptKeyInternal(ClickOnceTrustPromptKeyValue promptKeyValue, DeploymentSignatureInformation signatureInformation, String productName)
at Microsoft.VisualStudio.Tools.Applications.Deployment.ClickOnceAddInTrustEvaluator.VerifyTrustUsingPromptKey(Uri manifest, DeploymentSignatureInformation signatureInformation, String productName)
at Microsoft.VisualStudio.Tools.Applications.Deployment.ClickOnceAddInDeploymentManager.VerifySecurity(ActivationContext context, Uri manifest, AddInInstallationStatus installState)
at Microsoft.VisualStudio.Tools.Applications.Deployment.ClickOnceAddInDeploymentManager.InstallAddIn()
The Zone of the assembly that failed was:
MyComputer

This error was taken from the event log, but a similar (if not identical) error was in the details of the failed installation dialog.

I've had fun over the past few days tracking down a problem where D3 transitions weren't working correctly. Everything looked right, and I was pulling my hair out trying to figure out why the transition never got invoked. Copying the code in question to a separate page (in isolation) showed that the transitions worked fine, so I figured it must be a conflict with something else on the page.

After a couple hours of deleting things from the page (it's tough to pull things out because of the tree of dependencies), I figured out the problem was Datejs. A little googling confirmed it. What made this challenging was that there weren't any errors from the conflict; it just didn't work.

I’m not clear on what the cause of the problem is (I had already lost enough time), but I ended up switching everything to moment.js. Datejs looks like it’s been dead since 2008 anyway.