Creative Health Labs


Saturday, February 17, 2018

I'm an idiot, so this took me about 6 hours to figure out, but apparently many other people have struggled with it too, and I could not find a complete, working solution anywhere online.

If you want to be able to upload a file to Dropbox from Zoho Creator, create a function like the one below (here I called the function "Dropbox", under the namespace "Create"). In the function, the ID is passed in from the underlying form entry.

... where XXXXX is your secret Dropbox token, YYYY is the URL of the file you are uploading, FFFF is the Dropbox folder the file is being uploaded to, TTTT is the desired filename, and XXX is the desired extension.

This uses v2 of the Dropbox API. You can get your Dropbox token and use the very helpful Dropbox API Explorer here:

https://dropbox.github.io/dropbox-api-v2-explorer/#files_save_url

The key thing that was difficult to figure out, and is not well documented, is that the call to toString() is necessary: Dropbox expects an actual JSON string, and although the variable "data" above looks like JSON, it is a key-value map until it is serialized.
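In case it helps to see the same call outside of Deluge, here is a minimal Python sketch of the v2 /files/save_url request. The placeholder names (XXXXX, FFFF, TTTT, XXX) and the example file URL are stand-ins from the post, not real values; the point to notice is that the body must be an actual JSON string, the same gotcha toString() solves in Zoho Creator.

```python
import json

# Dropbox API v2 endpoint that saves a file from a URL into your Dropbox.
API_URL = "https://api.dropboxapi.com/2/files/save_url"

def build_save_url_request(token, folder, filename, ext, file_url):
    """Build the headers and JSON body for /2/files/save_url.
    The body must be a real JSON *string*, not a map/dict."""
    headers = {
        "Authorization": "Bearer " + token,
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "path": "/%s/%s.%s" % (folder, filename, ext),
        "url": file_url,
    })
    return headers, body

# Placeholder values standing in for XXXXX / FFFF / TTTT / XXX / YYYY.
headers, body = build_save_url_request(
    "XXXXX", "FFFF", "TTTT", "XXX", "https://example.com/file.pdf")
```

You would then POST `body` to `API_URL` with those headers; in Deluge the equivalent serialization step is the toString() call described above.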

Sunday, October 22, 2017

Monkeylearn.com is an awesome tool that lets non-coders apply machine learning algorithms. The challenge is that the stock MonkeyLearn integration in Zapier does not give you all the API data you may want. For example, the Extract tool returns only the extracted keywords, not their relevance or their position in the text.

So, if instead of using the MonkeyLearn zap you build your own custom POST request with Webhooks by Zapier, you can get the full API output in JSON format. Make sure to choose "Custom Request" in the Webhooks by Zapier action, NOT "POST". Once you are inside the "Custom Request" setup, choose POST as your method.
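For reference, here is a Python sketch of the request that the "Custom Request" action ends up sending. The endpoint shape follows MonkeyLearn's v3 REST API as I understand it; the API key and extractor ID below are placeholders, not real credentials.

```python
import json

def build_monkeylearn_request(api_key, extractor_id, texts):
    """Assemble the custom POST to a MonkeyLearn extractor (v3 API shape
    assumed). Returns the URL, headers, and JSON body you would paste
    into the Zapier "Custom Request" fields."""
    url = "https://api.monkeylearn.com/v3/extractors/%s/extract/" % extractor_id
    headers = {
        "Authorization": "Token " + api_key,  # MonkeyLearn uses "Token", not "Bearer"
        "Content-Type": "application/json",
    }
    body = json.dumps({"data": texts})        # list of texts to run extraction on
    return url, headers, body

# Placeholder key and extractor ID for illustration only.
url, headers, body = build_monkeylearn_request(
    "MY_API_KEY", "ex_XXXX", ["The patient was seen at 3am for chest pain."])
```

The JSON response to this request includes the full extraction output, including the relevance scores and text offsets that the stock zap drops.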

Monday, September 25, 2017

This is really interesting. Elon Musk co-founded another company, OpenAI, an artificial intelligence lab. OpenAI built a neural network designed to teach *itself* how to play a very difficult video game, and then the AI played against some of the best players in the world on stage in front of millions of people.

What’s really interesting is that the AI used strategies that the humans had never considered and could not even initially understand.

The implication for the ER is really interesting. Optimizing ED workflow is a "game" in the same sense: there are two teams (providers/nurses/techs vs. patients) with limited resources and functions, and each side has specific goals (patients want to get out quickly with their healthcare issue resolved; hospitals and doctors want to keep the patient alive and discharge them as quickly as possible using the fewest resources).

Imagine if somebody created a video game that mimicked the people, forces, goals, and parameters of the emergency room, and then created an AI bot to play it, in order to figure out the best strategy to "win." That strategy could then be studied and learned from by humans.

For those who are naive to the challenges of Emergency Medicine, here are some of the huge problems in our specialty:

1. How should one staff?

How many doctors and APPs (Advanced Practice Providers, i.e., physician assistants and nurse practitioners) should you staff, and at which hours? It depends on the volume and acuity of incoming patients, but you can't predict that accurately enough, so your best guess is usually based on averages from previous months and years.

However, if you staff for the average, you'll be understaffed roughly half the time (putting people's lives at risk) and overstaffed roughly half the time (wasting money).
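That half-and-half split can be checked with a quick simulation. This Python sketch uses assumed numbers (Poisson daily arrivals with a mean of 120 patients, capacity staffed exactly for that mean) and counts how often demand exceeds the staffed capacity:

```python
import math
import random

def poisson_sample(lam, rng):
    """Draw one Poisson(lam) sample using Knuth's algorithm."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

rng = random.Random(0)
mean_daily_patients = 120   # assumed average daily volume
staffed_capacity = 120      # "staff for the average"
days = 10_000

understaffed = sum(
    1 for _ in range(days)
    if poisson_sample(mean_daily_patients, rng) > staffed_capacity)

understaffed_fraction = understaffed / days  # lands close to one half
```

With these assumptions the understaffed fraction comes out just under 50%, which is the point: staffing to the mean guarantees you are short nearly half the days.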

2. Should you have more doctors, or more APPs, and in what ratio?

APPs are less expensive than doctors, but need oversight and have a reputation for ordering too many tests (although there are plenty of doctors who are guilty of that as well).

3. Should you pick up the next new patient, or discharge one of the patients who is ready to go?

This is a fascinating question that requires knowing a few other variables to solve properly (Is it a single-coverage site? What is the acuity of the next patient? Are other doctors twiddling their thumbs?).

Sunday, September 11, 2016

This is pretty cool. Medicare/CMS publishes a lot of its data for public consumption on two main websites:

Data.Medicare.gov
Data.CMS.gov

I don't know why these two are separate, but they do seem to host two distinct collections of healthcare data.

One can connect Windows Excel to an OData feed from Medicare/CMS, but I can't figure out how to do it with Mac Excel 2011 (I don't have a Windows computer). I prefer using Google Sheets anyway, so I wanted to find an easy way to connect Sheets to Medicare/CMS data.

Here's the technique I have found, and please let me know if there is a better way:

3. Click on "Export", then "Download", and then right-click and save the link to the CSV file. To clarify: don't click the CSV file to download it; rather, right-click and save the LINK to the CSV file. It should look something like this:
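Once you have that direct CSV link, you can also pull the data outside of Sheets. Here is a Python sketch, assuming the saved link serves plain UTF-8 CSV with a header row (the sample columns below are made up for illustration); the parsing step is split out so it can be exercised without a network call:

```python
import csv
import io
import urllib.request

def parse_csv_rows(text):
    """Parse CSV text (header row first) into a list of dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def fetch_csv_rows(csv_url, timeout=30):
    """Download a direct CSV link, like the one saved in step 3,
    and return its rows keyed by the header."""
    with urllib.request.urlopen(csv_url, timeout=timeout) as resp:
        return parse_csv_rows(resp.read().decode("utf-8"))

# Local demonstration with hypothetical columns; no network needed.
sample = "npi,provider_type\n1234567890,Emergency Medicine\n"
rows = parse_csv_rows(sample)
```

The same link works directly in a Sheets cell, which is what makes saving the link (rather than the file) the useful step.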

Friday, July 15, 2016

I am happy to announce that my detailed post on how I built CMElog.org using Knack, Zapier, Dropbox, Mailchimp, Mandrill, and Google Drive has been published on the Knack blog:

In this post, I dive into the details of how to use Zapier to build automated services like CMElog.org, in which medical providers' continuing medical education credits can be automatically extracted, identified, backed up, organized, and accessed.

Tuesday, May 17, 2016

There is a wonderful TED talk by Shawn Achor in which he explains that most people have the success and happiness equation backwards: whereas most people believe achieving greater success will result in more happiness, he argues that being a happier person will make you much more successful.
He goes on to say that happier, less stressed doctors are 19% faster and more accurate in diagnosing patients.

The first thing I did was download the data from Medicare and extract only those providers who are in Emergency Medicine. For EM providers, Medicare publishes how many level 3, level 4, level 5, and critical care ("CC") cases each provider was reimbursed for, represented by CPT codes 99283, 99284, 99285, and 99291 respectively.

Critical care, 99291, is a very interesting one because, more so than the other codes (which have strict criteria), whether a case qualifies is up to the doctor's judgment. There are guidelines, of course, but it is still largely a judgment call, and the physician is expected to write a brief explanation of why the case qualifies. Critical care is, of course, reimbursed at a higher rate than level 3 through 5 cases.

The Medicare dataset does not directly provide a critical care rate (the percentage of a provider's cases that are billed as critical care), but it can be derived from the data. This is the number to look at, because billing critical care at a significantly higher rate than your peers can raise red flags about your billing practices.

To derive the rate of critical care billing, I simply divided the number of CC cases for each provider by the sum of the level 3, 4, 5, and CC cases. Levels 1 and 2 were ignored in this calculation because they seem to be missing or very sporadic in the dataset, and the vast majority of EM cases are level 3 or higher. So admittedly there is a small bias in this calculation (because level 1s and 2s are not included), but the bias is uniform.
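The derivation is just a ratio; as a Python sketch (the case counts are made-up example numbers, not from the dataset):

```python
def critical_care_rate(l3, l4, l5, cc):
    """Critical care rate as derived in the post: CC cases divided by
    the sum of level 3, 4, 5, and CC cases (levels 1-2 excluded)."""
    total = l3 + l4 + l5 + cc
    if total == 0:
        return 0.0
    return cc / total

# e.g. a provider with 200/300/400 level 3/4/5 cases and 100 CC cases:
rate = critical_care_rate(200, 300, 400, 100)  # -> 0.1, i.e. 10% of cases billed as CC
```

Running this per provider gives the per-provider rates that feed the state-level comparison below.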

I dumped the data into Tableau and then organized the data by state, so that providers can see who is charging critical care the most frequently in their respective states.

Mind you, there are lots of caveats to this data. Some ER doctors, obviously, work with sicker patients. Others might be critical care fellows and do *only* critical care. And I'm sure there are other situations I don't know about. Nonetheless, this is the data that Medicare has published, so we need to be aware of it.

I then aggregated the data by state and calculated state-wide averages for critical care and displayed them using Tableau's map function: