No problem. The snippets below are tailored to a batch-processing workflow for your personal use (not a production environment), but hopefully provide enough scaffolding to build on. They're written with Python 2.7 and Requests but should port to another HTTP library with minimal effort.

Get a consumer key/secret from connect-integrate

First, reach out to connect-integrate@trimble.com if you haven't already to request application credentials on Trimble Identity. This will give you a service account you can use to ping Connect and retrieve your TODOs from a script. To follow this workflow exactly, ask connect-integrate to enable the "client credentials" grant type for your application. What you'll get back will be a "Consumer Key" and "Consumer Secret" (strings of seemingly random characters) that you can then use for authorization.

Stash the consumer key/secret in a safe place

Keep the consumer secret...secret. This is important because the key+secret will allow scripts to act on your behalf and access your Connect project data. Don't put the key+secret in a version control system (e.g., git) and try to avoid hard-coding them in your Python files if you can. Storing the key and secret as environment variables on your machine is one option. On Windows, you can follow the steps here, putting the key in a variable called TCKEY and the secret in a variable called TCSEC. This way you could safely share your script with colleagues (assuming they have their own key+secret). We'll see more about how this fits together below.

Get the Requests library

If you don't have it already, install the Requests library to follow along.

Import required libraries

For this work we'll only need three libraries: Requests (for HTTPS requests to Connect), base64 (for encoding the key/secret in requests), and os (for fetching the key/secret from environment variables).

import base64
import requests
import os

Load the client_id

If we've stored our application key/secret in environment variables, we can load them into the script and base64-encode them in a format Connect can understand.
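A sketch of that loading/encoding step, assuming the standard "key:secret" format used for Basic-style client credentials (the helper name load_client_id is ours):

```python
import base64
import os

def load_client_id():
    """Read TCKEY/TCSEC from the environment and base64-encode them.

    Assumes the "key:secret" Basic-credentials format; check the
    connect-integrate instructions for the exact format they expect.
    """
    key = os.environ["TCKEY"]
    secret = os.environ["TCSEC"]
    raw = "{0}:{1}".format(key, secret).encode("ascii")
    return base64.b64encode(raw).decode("ascii")
```

Calling load_client_id() once at the top of your script gives you the client_id used in the steps below, without the key or secret ever appearing in the source file.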

The first step in our workflow is to exchange our encoded application key/secret (the "client_id") for an "id_token" on Trimble Identity. This will authorize our script to execute requests on our behalf.

In words: "Provide a procedure step1_id_token that takes an encoded client_id and makes a POST request to Trimble Identity requesting an id_token using the client credentials grant type. If we receive one, return it. Otherwise, raise an exception."
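That description might look like the sketch below. The token endpoint URL is a placeholder (use whatever endpoint connect-integrate gives you), and the exact response fields may differ; the error handling follows the description above.

```python
import requests

# Placeholder URL -- substitute the Trimble Identity token endpoint
# you were given when your application was registered.
tid_token_url = "https://identity.trimble.com/token"

def step1_id_token(client_id):
    """Exchange the base64-encoded key:secret for an id_token."""
    r = requests.post(tid_token_url,
                      headers={"Authorization": "Basic {0}".format(client_id)},
                      data={"grant_type": "client_credentials"})
    r.raise_for_status()  # raise an exception on a failed request
    token = r.json().get("id_token")
    if token is None:
        raise ValueError("no id_token in response")
    return token
```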

You should be able to load the code created so far into a Python interpreter, execute step1_id_token(client_id) and see a long string of characters as the procedure's return value. This is the id_token we can use to begin talking to Connect directly.

Exchange ID Token for a Connect token

Now that we have an id_token, we can exchange it for a Connect token. To do this, we define a new procedure, step2_tc_token, that takes an id_token from step1 (above) and returns a Connect token (a string) we can use to sign all subsequent requests to Connect. An exception is raised if the request can't be completed.
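A sketch of that exchange. The tc_pod/tc_api values and the auth endpoint path and body shape are assumptions here; check the Connect API docs for the exact exchange for your region.

```python
import requests

# Placeholders -- use the pod URL and API version for your account/region.
tc_pod = "https://app.connect.trimble.com"
tc_api = "tc/api/2.0"

def step2_tc_token(id_token):
    """Exchange a Trimble Identity id_token for a Connect token."""
    # Endpoint path and JSON body shape are assumptions; see the API docs.
    r = requests.post("{0}/{1}/auth/token".format(tc_pod, tc_api),
                      json={"jwt": id_token})
    r.raise_for_status()  # raise an exception on a failed request
    return r.json()["token"]
```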

With our Connect token in hand, we can begin working with the Connect data we care about. We could define a series of procedures for interacting with the Connect API that follow a common pattern: given a header dict carrying our Connect token, execute the request and return the (possibly post-processed) result. Here's what that pattern might look like applied to a procedure that can fetch all of our projects:

def projects(headers):
    r = requests.get("{0}/{1}/projects".format(tc_pod, tc_api),
                     headers=headers)
    r.raise_for_status()
    return r.json()

Once you've got a list of projects, you could look up the project of interest by name, select its ID, then use the todos endpoints to fetch all the TODOs associated with the project.

Put it all together

Now we can put all the components we've built into a top-level procedure that defines our business problem. This would be the "entry point" of the script to be run every time it is invoked. In this case, we're just printing out the name and ID of all projects we have access to:

def workflow(client_id):
    token = step2_tc_token(step1_id_token(client_id))
    headers = {"Authorization": "Bearer {0}".format(token)}
    for p in projects(headers):
        print "{0}, {1}".format(p['name'], p['id'])

To test, load the script and execute workflow(client_id), where client_id is the base64-encoded key and secret created in the "Load the client_id" step.

We can continue to build out this workflow by adding new procedures that take a header, make a request, and return a result. Then we can glue them together in the top-level workflow, building the Authorization header at the start of the process and passing it to each procedure.

Things to watch out for

Some Connect endpoints are particular about the content-type you provide them. If a request seems to be failing, try adding "Content-Type":"application/json" to the header for the request.
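One way to do that without mutating the shared Authorization header is a small helper like this (the name is ours):

```python
def with_json_content_type(headers):
    """Return a copy of headers with an explicit JSON content-type added."""
    h = dict(headers)  # copy, so the shared Authorization dict stays untouched
    h["Content-Type"] = "application/json"
    return h
```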

Some endpoints may return "partial content" for resources that are too large to be returned in full. You should see this reflected in an HTTP 206 response code. If you need all the data (say, a list of all the files in a massive directory), use the Range header to loop over the content in batches and aggregate the results. Check the API docs for more details here.
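That batching loop might look like the sketch below. The "items" range unit and the stopping conditions are assumptions; check the API docs for the exact unit and batch limits.

```python
import requests

def fetch_all(url, headers, page_size=100):
    """Page through a large collection using the Range header."""
    items = []
    start = 0
    while True:
        h = dict(headers)
        # Request one batch; the "items" unit is an assumption.
        h["Range"] = "items={0}-{1}".format(start, start + page_size - 1)
        r = requests.get(url, headers=h)
        r.raise_for_status()
        batch = r.json()
        items.extend(batch)
        # 206 means "partial content": there may be more to fetch.
        if r.status_code != 206 or len(batch) < page_size:
            return items
        start += page_size
```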

Hope this helps -- Devon

PS: see attachment for a printout of all the source code we developed here.