
More than five years ago I wrote an article on how to Mass delete Picklist values in Salesforce. It is still my most visited article, and I have been meaning to get back to it for years. It now looks like this will become part of the standard functionality sometime in the near future (mass delete picklist values (setup)), but today I had to do this at a customer, so I had to solve it once again.

I tried using my old script but ended up with errors; Lightning doesn’t like loading external JavaScript into itself. I respect that and switched to Classic instead, since this is a sysadmin-only exercise anyway.

(The error points to the Aura Content Security Policy directive.)

The script works out of the box in Classic, but it’s very quiet about what it’s doing, so you don’t really know what’s happening. Also, if you accidentally click the bookmark in a production environment you’re going to have a bad time.

I deleted 350 picklist values in just over two minutes, so it won’t take forever. The next step for this script would be to add a progress spinner and some bells and whistles, but for now it at least solves the problem.

Since a Value Set needs to keep at least one value, the script will not be able to delete all of the active picklist values, but at least you'll save some mouse clicks.

You'll get a warning in the browser that synchronous XMLHttpRequest is deprecated:

Running the requests asynchronously works fine, but the browser will be completely swamped if you're deleting hundreds of picklist values, so the synchronous version is better here. When the day finally comes and support for synchronous XMLHttpRequest is removed I'll make sure to update this, but until then this hack is good enough.

Oh, how I have longed to write this blog post. Ever since I bought a Steam Link for myself as a Christmas gift I’ve been wanting to make use of it. I’m the kind of person who sometimes (a bit too often) buys stuff first and justifies the purchase later (sometimes with a bit too much infrastructure).

Back in February I gave it a try but never got it to work; I wasn’t able to ping my local machines from my EC2 machine over my OpenVPN tunnel. This confused me a lot and I left it for a while. I tried again last week and got it to work. The trick was that, since I’m running my pfSense instance in VMware, I had to put the virtual network card in promiscuous mode (yes, it’s really called that, and it basically means the virtual switch passes all traffic through to the guest instead of only the frames addressed to it).

Once the network card was in promiscuous mode everything just worked. I downloaded a couple of games, and when I started a Steam client on my local network it simply said that I could start streaming from the Windows machine I had in EC2.

In the blog post above the connection is made from your local machine to EC2, but I’m doing it in the other direction, so I’m going to explain that in more detail here. Also, since the premade EC2 Gaming AMI is a couple of years old, I had to update Windows, Steam and the Nvidia drivers, but I’ll go through that too.

EC2

These are the steps needed to get the machine up and running in EC2; refer back to the original blog post for the details.

Launch the ec2gaming machine in EC2 as a g2.2xlarge spot instance; this is documented in the blog post already. I created a Security Group with full access for my public IP address, but you can of course be more restrictive by only allowing RDP.

Uninstall OpenVPN (from the Start menu) and download a newer version from here. Don’t install the OpenVPN Service; it’s not needed.

Now is the time to take a snapshot of the machine, since a spot instance is always terminated when you turn it off. You can do this manually from the AWS Console or using the gaming-down.sh script as described on the blog. If you plan to use the scripts going forward, it’s a good idea to create an IAM user with limited access, since the credentials are stored in clear text in the script.

I’ve created a pretty narrow policy for the IAM user that runs gaming-up.sh and gaming-down.sh.
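
Roughly something like this should do; the exact set of actions depends on what the scripts call, so treat it as a starting point and tighten or extend it as needed:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:RequestSpotInstances",
        "ec2:CancelSpotInstanceRequests",
        "ec2:DescribeSpotInstanceRequests",
        "ec2:DescribeInstances",
        "ec2:TerminateInstances",
        "ec2:CreateImage",
        "ec2:DescribeImages",
        "ec2:DeregisterImage",
        "ec2:DeleteSnapshot"
      ],
      "Resource": "*"
    }
  ]
}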

pfSense

I’m using pfSense at home instead of a normal router; it runs in VMware ESXi (5.1 at the moment, but an upgrade is coming) and works like a charm. I will not go into detail about pfSense since I assume that if you’re reading this you are kind of a geek anyway. Follow the steps below to set up an OpenVPN server in pfSense that your EC2 machine can connect to.

Create the OpenVPN server according to these settings; instead of using screenshots I printed my configuration page as a PDF. Most of it is standard, and it’s all described in the blog post about the stretched LAN.

Go to Interfaces / Interface Assignments and assign the aws-lan-bridged network port as OPT1, or whatever name you like.

The firewall will probably have created some rules for your OpenVPN server, so you might not have to create the ones for the inbound traffic (WAN port 1194), but create the other rules as described in the blog post.

Create the bridge as described (it should consist of LAN and OPT1).

That’s what you need on the pfSense side of things, but if you’re like me and use VMware as a hypervisor you will need to do one more thing, as I found here after some serious Googling about why I couldn’t reach my internal network from EC2.

Log in to your ESXi host; from the command line you need to issue a command kind of like this:
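
In my case that meant enabling promiscuous mode on the standard vSwitch that the pfSense VM is attached to. I’m assuming the switch is called vSwitch0 here, so adjust the name to your setup; the same setting can also be flipped in the vSphere Client under the vSwitch security policy.

esxcli network vswitch standard policy security set --vswitch-name=vSwitch0 --allow-promiscuous=true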

Create a file called client.ovpn on your Windows server (the contents are sketched below), then right-click the OpenVPN GUI taskbar icon and choose Import file…
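
The exact contents depend on how you configured the server, so this is just a rough sketch for a bridged (tap) setup over UDP 1194; the remote address, cipher settings and certificate files are placeholders, and the OpenVPN Client Export package in pfSense can generate a correct file for you:

client
dev tap
proto udp
remote your.public.hostname 1194
resolv-retry infinite
nobind
persist-key
persist-tun
remote-cert-tls server
cipher AES-256-CBC
auth SHA256
ca ca.crt
cert client.crt
key client.key
verb 3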

Right-click the OpenVPN GUI icon again and you should have a menu option called client, and under that, Connect. Choose Connect and you should be connected to your LAN.

Steam

We left the fun stuff for last. Open Steam, log in with your credentials and make sure it’s configured for streaming; this is described in the first blog link. On your local network your other Steam client(s) should pick up that there’s a new device available for streaming.

Boot up your Steam Link and enjoy gaming!

Beware of shutting down the streaming server from the Steam Link; this will terminate the instance since it’s a spot instance.

Last week a client asked me to help out. We had been building a system that creates PDF files in Salesforce using Drawloop (today known as Nintex Document Generation, which is a boring name).

Anyways, we had about 2,000 PDFs created in the system, and after looking into it there doesn’t seem to be a way to download them in bulk. Sure, you can use the Data Loader and download them, but you’ll get the content as a base64-encoded CSV column, and that doesn’t really fly with most customers.

I tried dataloader.io and Realfire, and searched through every link on Google (or at least the first two pages), but I didn’t find a good way of doing it.

There is an old AppExchange listing for FileExporter by Salesforce Labs, and I think this is the actual software (FileExporter), but it stopped working with the TLS 1.0 deprecation.

Enough small talk. I had to solve the problem, so I went ahead and created a very simple Python script that lets you specify the query to find your ContentVersion records and also filter the ContentDocuments if you need to ignore some ids.

My very specific use case was to export all PDF files with a certain pattern in the filename, but only those related to a custom object with a certain status. The catch is that you can’t run a query like this one:

SELECT ContentDocumentId, Title, VersionData, CreatedDate FROM ContentVersion WHERE ContentDocumentId IN (
SELECT ContentDocumentId FROM ContentDocumentLink where LinkedEntityId IN (SELECT Id FROM Custom_Object__c))

It gives you a:
Entity 'ContentDocumentLink' is not supported for semi join inner selects

Instead I had to implement an option for a second query that returns the list of valid ContentDocumentIds to include in the download.
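
For reference, the heart of the script looks roughly like this. I’m using simple_salesforce here, and the Status__c filter, the Invoice filename pattern and the downloads folder are just made-up examples, so adjust the queries to your own objects:

import os
import requests
from simple_salesforce import Salesforce

sf = Salesforce(username='user@example.com', password='password', security_token='token')

# Query 1: find the ContentDocuments linked to the custom objects we care about.
# ContentDocumentLink can be the outer object of a semi join, just not the inner one.
links = sf.query_all(
    "SELECT ContentDocumentId FROM ContentDocumentLink "
    "WHERE LinkedEntityId IN (SELECT Id FROM Custom_Object__c WHERE Status__c = 'Approved')")
valid_ids = {r['ContentDocumentId'] for r in links['records']}

# Query 2: the latest version of every file matching the filename pattern.
versions = sf.query_all(
    "SELECT ContentDocumentId, Title, FileExtension, VersionData "
    "FROM ContentVersion WHERE IsLatest = true AND Title LIKE 'Invoice%'")

os.makedirs('downloads', exist_ok=True)
for record in versions['records']:
    if record['ContentDocumentId'] not in valid_ids:
        continue
    # VersionData comes back as a REST URL to the binary content, not the content itself.
    url = 'https://{}{}'.format(sf.sf_instance, record['VersionData'])
    data = requests.get(url, headers={'Authorization': 'Bearer ' + sf.session_id}).content
    filename = '{}.{}'.format(record['Title'], record['FileExtension'] or 'pdf')
    with open(os.path.join('downloads', filename), 'wb') as out:
        out.write(data)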

One more thing: keep in mind that even if you’re an administrator with View All, you will not see ContentDocuments that don’t belong to you or aren’t explicitly shared with you. You’ll need to either change the ownership of the affected files or share them with the user running the Python script.

Long time no blog post, sorry. I have been meaning to write this post forever but have managed to avoid it.

Anyways, consider the scenario: you’re sitting on your couch and you wonder:
– “What’s the temperature in my fridge?”
– “Did I close the door?”
– “What’s the humidity?”

You have already installed your Electric Imp hardware in the fridge (Best Trailhead Badge Ever) and it’s talking to Salesforce via Platform Events; you even get a Case when the temperature or humidity reaches a threshold or the door is open for too long.

But what if you just want to know the temperature? And you don’t have time to log into Salesforce to find out.

So now we have Big Object records being created, but the only way to see them is by executing a SOQL query in the Developer Console (SELECT DeviceId__c, Temperature__c, Humidity__c, ts__c FROM Fridge_Reading_History__b).

I have created a Lightning Component that uses an Apex Class to retrieve the data.

Let’s start with a screenshot of how it looks and then post the wall of code.

I first heard about Big Objects in a webinar, and at first I didn’t really see a use case. It was also in beta, so I didn’t care that much, but now that it has been released in Winter ’18 everything has changed.

My favourite Trailhead badge is still the Electric Imp IoT one, and I thought it would be fun to store the temperature readings over a longer period of time. Since I run my integration in a Developer Edition I have 5 MB of storage available, which is not that much given that I receive between one and two Platform Events per minute.

Most objects in Salesforce use 2 KB per record (details here), so with 5 MB I can store about 2,500 records (fewer, actually, since I have other records in the org).

Big Objects give you a limit of 1,000,000 records, so this should be enough for about a year’s worth of readings. Big Objects are meant for archiving and you actually can’t delete the records, so I have no idea what will happen when I hit the limit, but I’ll write about it then.

Anyways, there are some limitations on Big Objects:
* You can’t create them from the Web Interface
* You can’t run Triggers/Workflows/Processes on them
* You can’t create a Tab for them

The only way to visualise them is to build a Visualforce Page or a Lightning Component and that’s exactly what I’m going to do in this blog post.

Archiving the data

Starting out, I’m creating the Big Object using the Metadata API. The object looks very similar to a standard object, and I actually stole the definition from a custom object of mine called Fridge_Reading_Daily_History__c. The reason I had to create that custom object is that I can’t create a Big Object record from a trigger, and I want to store every Platform Event.

Fridge_Reading_Daily_History__c has the same fields as my Platform Event (described here), and I’m going to create a Fridge_Reading_Daily_History__c record for every Platform Event received.

Keep in mind that once you have created the Big Object you can’t modify much of it, so to change it you need to remove it (which can be done from Setup) and then deploy it again.

In my previous post I created a trigger that updated my SmartFridge__c object for every Platform Event. This works fine, but with Winter ’18 you can actually create Processes that handle Platform Events, so I changed this. Basically you create a Process that listens for the Fridge_Reading__e event and finds the SmartFridge__c record with the same DeviceId__c.

This is what my process looks like:

I added a criterion to check that no fields were null (I set them as required on my Fridge_Reading_Daily_History__c object).

Then I update my SmartFridge__c object

And create a new Fridge_Reading_Daily_History__c object

So far so good. Now I have to make sure I archive my Fridge_Reading_Daily_History__c records before I run out of space.

After trying different approaches (Scheduled Apex), I realised that I can’t archive and delete the records in the same transaction (it’s in the documentation for Big Objects), and I didn’t want one scheduled job running every hour that archives to the Big Object and then another Scheduled Apex job that deletes the Fridge_Reading_Daily_History__c records that have been archived.

In the end I settled on a Process on Fridge_Reading_Daily_History__c that runs when a record is created.

The Process checks whether the Name of the record (an Auto Number field) is evenly divisible by 50.

This class calls the two future methods that archive and delete. Yes, they might not run in sequence, but it doesn’t really matter. You might also wonder why there’s a (callout=true) on the @future annotations. I got a CalloutException when trying to run without it, so I guess the data is not stored inside Salesforce but rather in Heroku or similar, and the platform needs to make a callout to get the data (I got the error on the SELECT line).


I have been playing around with Einstein Analytics (the thing they used to call Wave) and I wanted to automate the upload of data, since there’s no point in having dashboards and lenses if the data is stale.

Yes, the logging is a bit on the extensive side. Make sure to add these environment variables in AWS Lambda:
SF_USERNAME - your SF username
SF_PASSWORD - your SF password (encrypted)
SF_SECURITYTOKEN - your SF security token (encrypted)
FILE_BUCKET - the bucket where the mapping file lives
METADATA_FILE_KEY - the path to the metadata file in that bucket (you get this from Einstein Analytics)
WSDL_FILE_KEY - the path to the partner WSDL file in the bucket

I added an S3 trigger that runs this function as soon as a new file is uploaded. It has some issues (it crashes on parentheses in the file name, for example), so please don’t use this for a production workload before making it enterprise grade.
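
The real function builds a SOAP client from the partner WSDL above, but the overall flow is easier to show with simple_salesforce and the Analytics External Data API objects. This is a trimmed-down outline: the dataset alias is a placeholder and the KMS decryption of the password and token is left out.

import base64
import os

import boto3
from simple_salesforce import Salesforce

s3 = boto3.client('s3')

def lambda_handler(event, context):
    # The CSV file whose upload triggered the function
    s3_info = event['Records'][0]['s3']
    csv_bytes = s3.get_object(Bucket=s3_info['bucket']['name'],
                              Key=s3_info['object']['key'])['Body'].read()

    # The dataset metadata file you downloaded from Einstein Analytics
    metadata_bytes = s3.get_object(Bucket=os.environ['FILE_BUCKET'],
                                   Key=os.environ['METADATA_FILE_KEY'])['Body'].read()

    # The real function decrypts SF_PASSWORD and SF_SECURITYTOKEN with KMS first
    sf = Salesforce(username=os.environ['SF_USERNAME'],
                    password=os.environ['SF_PASSWORD'],
                    security_token=os.environ['SF_SECURITYTOKEN'])

    # 1. Create the upload header, pointing at the dataset (alias is a placeholder)
    header = sf.InsightsExternalData.create({
        'EdgemartAlias': 'My_Dataset',
        'Format': 'Csv',
        'Operation': 'Overwrite',
        'Action': 'None',
        'MetadataJson': base64.b64encode(metadata_bytes).decode('ascii')})

    # 2. Attach the data itself (split into more parts if the file exceeds ~10 MB)
    sf.InsightsExternalDataPart.create({
        'InsightsExternalDataId': header['id'],
        'PartNumber': 1,
        'DataFile': base64.b64encode(csv_bytes).decode('ascii')})

    # 3. Flip the Action to Process so Analytics picks the upload up
    sf.InsightsExternalData.update(header['id'], {'Action': 'Process'})
    return 'Queued {} for processing'.format(s3_info['object']['key'])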

As an avid Trailblazer I just have to Catch ‘Em All (Trailhead badges, that is), and the project to integrate Electric Imp with my fridge was a fun one.

Build an IoT Integration with Electric Imp

After I bought a USB cable to supply it with power it now runs 24/7, and I get Cases all the time; I haven’t really tweaked the setup yet.

I have been looking at the new Platform Events, and I thought that this integration shouldn’t be using a simple upsert operation on an SObject; it’s 2017, for God’s sake! So, said and done: I set out to change the agent code in the Trailhead project to publish a Platform Event every time it’s time to send an update to Salesforce.

First of all you need to define your Platform Event; here is the XML representation of it:
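
It lives in a file called something like Fridge_Reading__e.object. I’m reconstructing the definition below from the fields used later in these posts (I have left out the door status field), so double-check it against your own event before deploying:

<?xml version="1.0" encoding="UTF-8"?>
<CustomObject xmlns="http://soap.sforce.com/2006/04/metadata">
    <deploymentStatus>Deployed</deploymentStatus>
    <label>Fridge Reading</label>
    <pluralLabel>Fridge Readings</pluralLabel>
    <fields>
        <fullName>DeviceId__c</fullName>
        <label>DeviceId</label>
        <length>32</length>
        <required>true</required>
        <type>Text</type>
    </fields>
    <fields>
        <fullName>Temperature__c</fullName>
        <label>Temperature</label>
        <precision>9</precision>
        <scale>2</scale>
        <required>true</required>
        <type>Number</type>
    </fields>
    <fields>
        <fullName>Humidity__c</fullName>
        <label>Humidity</label>
        <precision>9</precision>
        <scale>2</scale>
        <required>true</required>
        <type>Number</type>
    </fields>
    <fields>
        <fullName>ts__c</fullName>
        <label>ts</label>
        <required>true</required>
        <type>DateTime</type>
    </fields>
</CustomObject>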

In Winter ’18 you can use Process Builder on Platform Events, but my Developer Edition won’t be upgraded until next Saturday.

So I made things a bit more complex by introducing Platform Events and a trigger, but I feel better knowing that I use more parts of the platform. The next step will be to use Big Objects to store the readings from the fridge over time and visualise them.


One common task when integrating Salesforce with a customer’s systems is to import data, either as a one-time task or regularly.

This can be done in several ways depending on the in-house technical level, and the simplest way might be to use the Import Wizard or the Data Loader. If you want to do it regularly in a batch fashion and are fortunate enough to have AWS infrastructure available, using Lambda functions is an alternative.

Recently I did this as a prototype and I will share my findings here.

I will not go into details about AWS and Lambda; I used this tutorial to get started with Lambda functions, but most of it didn’t concern the Salesforce parts, rather AWS specifics like IAM.