Starting to combine the panel and the table/recordset. It’s going pretty smoothly. I’m still using the data array, but I’ll move to more form-based data tomorrow or Wednesday. Nicely, the height of the panel resizes based on what’s in it. And I should be able to use the same module for the login form.

Obligations and Commitments from FACTS are cumulative; Actuals are non-cumulative. Remove all the calculations summing up obligations and commitments by month, and store monthly values from Cognos instead.
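For reference, the relationship between the two representations, as a minimal sketch (the helper name is mine, not anything in the codebase):

```javascript
// Hypothetical helper: turn a cumulative series (FACTS-style obligations or
// commitments, ordered by month) into non-cumulative monthly values by
// differencing adjacent entries.
function monthlyFromCumulative(cumulative) {
  return cumulative.map(function (v, i) {
    return i === 0 ? v : v - cumulative[i - 1];
  });
}

console.log(monthlyFromCumulative([10, 25, 40])); // [ 10, 15, 15 ]
```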

IE7 works with YUI!

Need to add search to “View Financial Data”. Rather than showing all the subelements, clicking on a row in Requisition Information (top grid) will show the associated detailed financial view. We should also add a search to Req Info. The summary always shows.

Create historical record Cognos data table. We can use scripts later to integrate data into our reports…

Finished chapter 2, starting on chapter 3. Also got timers working from the gallery. One search, one bug, one additional search.

“Unlock project” is unlocking multiple projects locked by the same user(?)

Trying out YUI on the integration server… Success! Take that, IE7! OK, maybe not. Just realized that I only ran the tests using the browser that’s on the server, which is FF. Need to try running on a client box with IE tomorrow. The most important example to run will probably be add_capability.html.

Working my way through the YUI cookbook. In the element_classes example (Recipe 2.2), there is a call to Y.log(). It turns out that there is support for a logger window. I found this by Googling “yui Y.log”; the top result was http://yuilibrary.com/yui/docs/console/, which was pretty much what I needed. Using the documentation as a guide, I was able to integrate it, including giving it global scope. And then, after a few more minutes, I was able to add draggable behavior by adding “dd-plugin” to the YUI().use() arguments and then to the Y.Console.plug() arguments. For future reference, here’s the total code required for the console, with three lines of test code (bracket the code with <body class="yui3-skin-sam">):
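(The original snippet wasn’t preserved in this entry; the following is a reconstruction based on the YUI 3 Console documentation linked above. The version number in the script URL and the message text are placeholders.)

```html
<body class="yui3-skin-sam">
<script src="http://yui.yahooapis.com/3.4.1/build/yui/yui-min.js"></script>
<script>
YUI().use('console', 'dd-plugin', function (Y) {
    // Create and render the console, then make it draggable by its header.
    var console = new Y.Console().render();
    console.plug(Y.Plugin.Drag, { handles: ['.yui3-console-hd'] });

    // Three lines of test code at different severities.
    Y.log('an info message');
    Y.log('a warning', 'warn');
    Y.log('an error', 'error');
});
</script>
</body>
```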

Need to add a dialog that shows the amount of time estimated for the query (average? min/max?)

Need to be able to get budgets with respect to Capability, Reqs or Projects depending on the context

Need to be able to click on a query result line and go to the project. If multiple projects fit the criteria, all are listed in a dialog and the desired project can be selected from it

Add tooltip to save/load query datagrid

Queries by individual capability – underfunded

Queries by individual appropriation – underfunded

Add the name of the query to the QueryBuilder titlebar

Incorrect total for committed. May be coming from Cognos

Add a Requisition Amount column

Adding a column (Appropriation?) breaks the query

Incorrect Obligation Outlay goal (Runs on the budget center total rather than the Req total)

11:30 – 6:00 FP

Started to run analysis on the Phantom and Headset results. The Phantom results are promising, but not significant. I need to get more data and/or figure out a way of pulling out the confounding data.

The headset data is somewhat more straightforward, if a little dull. It turns out that in the pilot study, the ability to determine the location of a voice or tone with a 4-speaker headset is not that accurate, and degrades pretty linearly as the number of sources goes up. Not exactly an earth-shattering result.

Need to get the spaces out of the CSV output. The space causes OpenOffice Calc to read in the data as a string and not a number.
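The fix belongs in the apps’ writer code, but the idea is simple enough to sketch (in JavaScript for illustration, whatever language the apps are actually in; the helper name is hypothetical):

```javascript
// Hypothetical helper: emit a CSV row with no padding around the values, so
// a spreadsheet parses numeric fields as numbers rather than strings.
function toCsvRow(values) {
  return values.map(function (v) { return String(v).trim(); }).join(',');
}

console.log(toCsvRow([1.5, ' 2.25 ', 3])); // "1.5,2.25,3"
```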

Need to make the headset app and the Phantom app exit after writing out the file. Restarting ensures that the recorded data will be clean. There was a bug in the headset code where old data was being kept between sessions and subsequently had to be filtered out by hand.

Ran Dimitri through the Phantom for his second pass. Looking at how he handled the system makes me think that there is an “expert” level of training that most of the other test subjects haven’t attained. I wonder if that makes the results cleaner? Need to examine further.

Got the sensor resistance converted to voltage. A 1 kΩ resistor seems to work best, since I want most of the sensitivity at light pressure. Next, build a circuit with three channels that connects to the Phidgets voltage sensor. The only thing I’m wondering is whether I’ll get enough resolution with the voltage range I’m getting – zero to about 2.5 volts. I’m estimating that I should get about 1500–3000 steps out of that, assuming -30 V to +30 V is resolved to an (unsigned?) 16-bit int.
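A quick sanity check on that estimate (the ±30 V span and 16-bit assumption come from the note above; the helper name is mine):

```javascript
// Count the ADC steps available inside a small voltage swing, given an ADC
// that maps adcMin..adcMax volts onto an n-bit integer.
function stepsInRange(vSpan, adcMin, adcMax, bits) {
  var voltsPerStep = (adcMax - adcMin) / Math.pow(2, bits);
  return Math.floor(vSpan / voltsPerStep);
}

console.log(stepsInRange(2.5, -30, 30, 16)); // 2730 steps if all 16 bits carry magnitude
console.log(stepsInRange(2.5, -30, 30, 15)); // 1365 steps if one bit is spent on sign
```

The two cases roughly bracket the 1500–3000 guess, so the estimate looks about right either way.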