The amazing adventures of Doug Hughes

Archive for November, 2007

I have been working on a Flex project that will use ColdFusion on the backend. Earlier this week I ran into a little problem that apparently has been reported in the past, but I was unable to find any information about the issue. In a nutshell, when using ColdFusion on the backend of a Flex application, ColdFusion will not (cannot?) resolve a ColdFusion mapping for any code in the pseudo-constructor, or in methods called from the pseudo-constructor, of a CFC. This only seems to affect multi-server installations of ColdFusion, and a bug report has already been filed.

Let me explain a bit more about what’s going on. If you’d like to play along at home, grab the sample code. You can place the files anywhere you like, but the ‘com’ directory must be put into the web root. You will also need a ColdFusion mapping (it doesn’t matter what it is called or where it points; the sample uses a mapping for ColdSpring). Here is the code for test.cfc; if you are using a different mapping for your test, change ‘/coldspring’ to whatever mapping you wish to use.

Just to explain the code real quick: there are two methods, setup() and test(). In each of these methods, the value of expandPath(‘/coldspring’) is set to a variable, variables.setupVal and variables.testVal respectively. In test() we also return a string which outputs the values of these two variables. You will see that in the pseudo-constructor we are calling setup(), so setup() runs every time the CFC is instantiated.
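Based on that description, test.cfc looks roughly like this (a sketch reconstructed from the description above, not the exact sample file; test() is marked access="remote" since Flex calls it over remoting):

```cfml
<cfcomponent>

	<!--- Pseudo-constructor: this runs every time the CFC is instantiated --->
	<cfset setup() />

	<cffunction name="setup" access="public" returntype="void">
		<!--- Resolve the mapping at instantiation time --->
		<cfset variables.setupVal = expandPath('/coldspring') />
	</cffunction>

	<cffunction name="test" access="remote" returntype="string">
		<!--- Resolve the same mapping again at call time --->
		<cfset variables.testVal = expandPath('/coldspring') />
		<cfreturn "setup = #variables.setupVal#, test = #variables.testVal#" />
	</cffunction>

</cfcomponent>
```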

When you instantiate this CFC in a ColdFusion page (such as test.cfm in the sample code) and then call the test() method, you will see that the two values of expandPath(‘/coldspring’) are the same, and should be the path set for the ColdFusion mapping you are using.

setup = c:\my\path\to\ColdSpring
test = c:\my\path\to\ColdSpring

Now, if you run remotingTest.html, which is a Flex application that calls the same method on the same CFC, you will probably get results that look similar to this:

setup = c:\path\to\samplecode\coldspring
test = c:\my\path\to\ColdSpring

When using the same ColdFusion code from a Flex app, setup() has a different value for expandPath(‘/coldspring’) (remember, setup() is called when the CFC is instantiated). Specifically, instead of resolving the ColdFusion mapping, ‘/coldspring’ is merely appended to the path of the web root where the application is running. The somewhat unexpected result is that in test() the ColdFusion mapping is resolved and the correct, or expected, value of expandPath(‘/coldspring’) is returned.

If you get the same results in test.cfm and the Flex application, I’d bet you are running a stand-alone install of ColdFusion. I have been able to reproduce this in both ColdFusion 7 and ColdFusion 8. A workaround would be to drop whatever mappings you need into your web root, but in a lot of cases that might be impractical.

Run the sample code and let me know what your results are, including what OS, web server, and version of CF you are using.

This would allow us to handle any arbitrary number of queries in a request by handing control of the queries over to a named transaction. It would allow us to create transaction objects, have multiple simultaneous transactions, and, in general, give us near-total control over our transactions. If you like the idea, please leave a comment so Adobe can see the support, and link to this post from your blog if you like it too. Thanks!

An interesting news item from CNET this morning: the British Government has fessed up to losing “two discs containing the details of everybody in the U.K. who claims and receives child benefits.” The CNET story is here.

While the article never specifies what kind of disks these were, I’m assuming CDs or DVDs. It’s interesting to note that, according to the article, the disks (or possibly the files they contain) were password-protected, but not encrypted. They were supposed to be couriered from one location to another for auditing and never arrived. So a second set was sent, and that set did arrive. The scariest part is that the higher-ups didn’t even hear about the incident until three weeks after the fact.

Now, I’ve never dealt with two disks full of people’s Social Security numbers, social security account balances, etc., but I’d like to think that if I lost one I’d have enough courage to fess up ASAP. On the other hand, knowing the magnitude of just such a mistake, I tend to think anyone would try to collect at least one more paycheck before moving into their cave down by the Thames.

Then again, the police and other officials are saying there’s been no sign of fraud, so there’s no reason to assume these disks have fallen into the wrong hands.

In any case… if you made a mistake of this magnitude, would your first response be to bring it to someone’s attention, or would your first response be to bury it? Be honest… the world is watching. 😉

In two previous blog posts I touched on the subject of data warehousing and how and why it differs from the sort of Relational Database Management Systems (RDBMS) that we are now all so used to using. In the comments on one of those posts was a request to expand a bit more on the subject of “normalizing” data. Normalizing data is at the root of RDBMS database structures. In my opinion it is far more art than science, and there is rarely a 100% correct way to normalize data.

Over the years I have found, to my eventual pain, that over-normalizing data leads to diabolical SQL statements whenever there is a need to report on that data. So then, what is normalizing, why is it done, and do you really care? I found a very interesting post by some IBM engineers who were around when the whole subject of normalizing data and RDBMS was first tossed around. This was back in the 1980s, and I had always thought that normalization emerged because of a perceived need to reduce data duplication, like this.

If you have 1,000 employees in the same location and you want each to use the location address, why have it in an employee table 1,000 times? Create an “employee” table and a “location” table, give each employee a unique identifier and each location a unique identifier, and “relate” the two together (the relating bit is where the Relational Database Management System concept comes in). At a basic level you could put a locationUID column in the employee table and simply insert the UID from the location table in that column. The great thing then is that any address change need only be made once, in the location table, and applies to all 1,000 employees immediately.
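As a quick sketch (table and column names here are just for illustration), that normalized structure might look like:

```sql
-- One row per location; the address lives here exactly once
CREATE TABLE location (
    locationUID INT PRIMARY KEY,
    address     VARCHAR(200)
);

-- Each employee points at a location instead of repeating its address
CREATE TABLE employee (
    employeeUID INT PRIMARY KEY,
    name        VARCHAR(100),
    locationUID INT REFERENCES location (locationUID)
);

-- A join reassembles the combined view whenever it is needed
SELECT e.name, l.address
FROM employee e
JOIN location l ON l.locationUID = e.locationUID;
```

A single UPDATE against the one location row changes the address for all 1,000 employees at once.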

Despite this obvious advantage, it turns out the major reason for RDBMS was more a response to databases getting larger and a need to minimize resource use (CPU cycles, memory, etc.), as both were significantly more expensive back in the ’80s. One other key thing that has changed dramatically since the ’80s is typical database size, in terms of numbers of tables and columns.

At that time data was often entered via punch cards; an 80-column punched card averaging 20 fields per record was typical. In today’s RDBMS it is not uncommon to see hundreds of columns across thousands of tables in a single database. It is this which often necessitates the diabolical SQL, with gratuitous levels of joins, needed to run simple reports. Hence my evolving series on the need for data warehousing: Data Warehousing Part 1: OLAP and OLTP, and Data Warehousing Part 2: Dimensional Modeling.

Would you risk driving at speeds up to 100mph (on the Autobahn, of course ;o) for a distance equivalent to driving from Los Angeles to New York (2,500 miles) in a concept car which has only been driven across the car park – oops, parking lot (my Britishisms got the better of me!)? The answer is you should not, if you want to arrive safely and in a reasonable time-frame. Yet I have seen exactly that analogous situation many, many times: web applications get rolled out before anyone has made sure that they will actually work with more than one user.

As I have mentioned a few times in my blog posts, I have been fortunate to spend the past 8 years traveling to many parts of the USA and other parts of the world looking at ColdFusion and JRun applications. Some of the things I have found are mind-blowing in the sense that they ever reached production, which they all did. Here are a few examples.

Scheduled tasks being created in the Application.cfm file with no conditional code. Tasks were being re-created on every single request, even while they were running, and the result was a very unstable application.
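A common fix for that mistake (sketched below; the application-scope flag name and task details are my own for illustration) is to guard one-time setup in Application.cfm so it runs once per application rather than on every request:

```cfml
<!--- Application.cfm (sketch): create the scheduled task only once --->
<cfif NOT structKeyExists(application, "tasksScheduled")>
	<cflock scope="application" type="exclusive" timeout="10">
		<!--- Double-check inside the lock so racing requests don't both run this --->
		<cfif NOT structKeyExists(application, "tasksScheduled")>
			<cfschedule action="update"
				task="nightlyCleanup"
				operation="HTTPRequest"
				url="http://localhost/tasks/cleanup.cfm"
				startDate="#dateFormat(now(), 'mm/dd/yyyy')#"
				startTime="03:00 AM"
				interval="daily" />
			<cfset application.tasksScheduled = true />
		</cfif>
	</cflock>
</cfif>
```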

A JRun Java app with debug writes to the std-out log throughout the code in production. The std-out log grew so fast it was virtually unreadable after 30 minutes; it was growing by 3GB per hour.

NICs and switches both set to autosense, with the result that every device in the infrastructure (web/CF servers, database servers, etc.) was pegged at 10Mbps half-duplex. When I arrived on site the client had a farm of 12 web servers and 3 database servers that was dying at 500 concurrent users, and they needed to support 5,000 concurrent users. After we fixed the settings, that site supported over 6,500 concurrent users with 6 web/CF servers. So it is not always code that causes issues.

My main point here is that we should never put anything into a production environment until it has been very thoroughly tested. In the cases above, all of these serious issues would have been discovered with a proper code deployment regime, one which includes testing everything, and I do mean everything, first. I made a previous blog post relating to testing code before deploying it.

This is just a quick blog entry to hopefully help other people who end up in a situation similar to the one I was in this morning. Specifically, I’ve been contributing to a new project, and I went to commit my updates into Subversion this morning only to receive this lovely error:

Now, despite this error’s clarity and obviousness, it took my team about an hour and a half to work out.

A check of the configuration files on our Subversion server confirmed that I was in a group which had read and write access and that others on this project were quite able to commit to the repository.

Also, I was able to commit to other repositories using the exact same permissions. I mean, literally, the groups and rights defined were the same.

I could checkout from the repository with no problems as well.

The log files were a mess of uselessness too.

A Googling of the problem turned up many cases where users had upgraded from earlier versions of Subversion to the latest, which, in fact, we recently did. However, the repository in question was a new one and had never existed in previous installations. Nothing else we could find seemed to have any bearing on the problem.

To make sure it wasn’t a bug in Subclipse, I went to a different machine, checked out the project, and tried to commit some changes, with the same result.

After quite a while we tried something that seemed to us to be a long shot. The casing in my SVN URL was not exactly the same as the repository name on disk. For example:
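Something along these lines (the repository name and server here are made up for illustration):

```
Repository on disk:  /var/svn/repositories/MyProject
URL I was using:     http://svn.example.com/repos/myproject
URL that matches:    http://svn.example.com/repos/MyProject
```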

In case you were not aware, Alagad is giving away free tickets to CF.Objective() to the first five registrants for our already very affordable Model-Glue training. We still have three tickets available, and the offer is only good through today. If you’re already planning to attend CF.Objective() and you want some Model-Glue training, then you should seriously consider this offer!