An IDE gives you three very useful tools. Code editing. Code navigation. And code debugging. Each of the commenters, I am sure, has different needs and styles for these tools. You will, too.

All the IDEs tend to have very aggressive code editing help. This is great when you are working with new packages but the help gets tiresome as your familiarity with the packages increases and yet the IDE continues to make suggestions when you know full well what you want. Code navigation in modern IDEs is a godsend: to be able to move easily between classes, up and down the inheritance hierarchy, and around usages. Code debugging is less about the source and more about the data and the threads. The better the tools are at showing data structures -- the plural is important! -- and visually connecting threads w/ stack traces the easier your job becomes.

I have not answered your question because there is no answer. We shape our tools and our tools shape us. It almost does not matter which you pick -- Eclipse, IntelliJ, NetBeans, Emacs w/ JDEE, Vim w/ JDE, .... What matters is that using the IDE becomes second nature to you.

And don't fool yourself that you will fix the IDE's problems. You don't have that much free time.

I work for a small publisher intermediary and while working with our data recently I needed a testing ISSN. It got me thinking about the work involved in getting an ISSN for my blog -- an online serial publication. There is one form to complete and that is then mailed to the Library of Congress along with the "front page" of the publication. I did this about two weeks ago and today the United States ISSN Center at the Library of Congress delivered my new ISSN. In all its glory, here it is

Over the last few days I have needed a few command line tools -- mostly for data cleanup. I got tired of manually parsing command line arguments. I wanted something more automated. My first thought was to design a data-structure that expressed the command line arguments and then write an interpreter that would parse the arguments with its guidance. I spent an hour doing this only to realize that preparing the data-structure was almost as much work as manual parsing. Then I remembered that Ant has a simple facility that uses reflection to execute a task expressed in XML against an object. This is what I wanted. So, for example, the command line tool

The magic, of course, is reflection. The reflection-based parser sees "--verbose" and matches it against setVerbose(); "--multiplier" matches against setMultiplier(), which also tells the parser that the option takes one numeric argument. The remaining positional arguments are passed to addPositional() as numbers. Once parsing is done, run() is called.
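As a sketch of how such a reflection-based parser can work (this is my illustration, not Ant's code; the setVerbose/setMultiplier/addPositional/run names follow the description above):

```java
import java.lang.reflect.Method;

// A reflection-based command line parser sketch. Option --name is matched
// to setName(); a setter with a parameter consumes the next argument; the
// remaining arguments go to addPositional(); finally run() is called.
public class ReflectiveCli {

    public static void parse(Object tool, String... args) throws Exception {
        Class<?> type = tool.getClass();
        int i = 0;
        while (i < args.length && args[i].startsWith("--")) {
            Method setter = find(type, "set" + capitalize(args[i].substring(2)));
            if (setter.getParameterCount() == 0) {
                setter.invoke(tool);               // a flag, eg --verbose
                i += 1;
            } else {
                // an option with one argument, eg --multiplier 2
                setter.invoke(tool, convert(setter.getParameterTypes()[0], args[i + 1]));
                i += 2;
            }
        }
        Method add = find(type, "addPositional");
        for (; i < args.length; i++) {
            add.invoke(tool, convert(add.getParameterTypes()[0], args[i]));
        }
        find(type, "run").invoke(tool);
    }

    private static Method find(Class<?> type, String name) {
        for (Method m : type.getMethods()) {
            if (m.getName().equals(name)) {
                m.setAccessible(true);
                return m;
            }
        }
        throw new IllegalArgumentException("unknown option method " + name);
    }

    private static Object convert(Class<?> target, String value) {
        if (target == double.class || target == Double.class) return Double.valueOf(value);
        if (target == int.class || target == Integer.class) return Integer.valueOf(value);
        return value;
    }

    private static String capitalize(String s) {
        return Character.toUpperCase(s.charAt(0)) + s.substring(1);
    }
}
```

The tool class itself stays free of parsing code; it just exposes setters, addPositional(), and run().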

I was having a devil of a time yesterday with the simple task of using a Java program to copy records as objects from one MySql database to another [1]. I kept running out of memory. While the root problem had to do with creating 2M objects in memory it did lead to a better understanding of how MySql loads result sets. In short, it loads the whole result set into memory in one shot. So, if you have 1M records at 1K each in the result set you will need at least 1G of memory to hold them. If you then build 1M objects from these records you will need an additional 1M * object size of memory. In other words, a lot of memory. You can have the MySql JDBC driver "stream" the result set, however. That is, read the records row by row from the database. It is less efficient for the driver -- multiple trips back and forth between the server -- but doing so requires far less memory. You can turn on streaming at the statement level or at the datasource level.

Statement Level

To turn on streaming at the statement level you need to use a set of common JDBC settings that, when used together, inform the driver to stream. When you create or prepare a statement you must define how the result will be used and what the fetch size is. For example,
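A sketch of such a statement, assuming `connection` is an open MySql Connector/J connection (this needs a live database to run):

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

// Forward-only, read-only, with a fetch size of Integer.MIN_VALUE is the
// combination Connector/J interprets as "stream the result set row by row".
Statement statement = connection.createStatement(
        ResultSet.TYPE_FORWARD_ONLY,
        ResultSet.CONCUR_READ_ONLY);
statement.setFetchSize(Integer.MIN_VALUE);
ResultSet results = statement.executeQuery("select * from records");
try {
    while (results.next()) {
        // process one row at a time; only the current row is held in memory
    }
} finally {
    results.close();
    statement.close();
}
```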

For more information see the section Result Set in JDBC API Implementation Notes.

DataSource Level

To turn on streaming at the datasource level you need to add a property to the JDBC URI. For example, Integer.MIN_VALUE is -2^31 and so use
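With Connector/J the relevant property is defaultFetchSize (check your driver version's documentation; property names have changed over time). Since Integer.MIN_VALUE is -2147483648 the URI looks like

```
jdbc:mysql://localhost:3306/mydb?defaultFetchSize=-2147483648
```

The host and database name here are placeholders; the driver will then call setFetchSize() with that value on every newly created statement.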

An advantage to using JMX is that the JMX client gets to the service via a different network path than the service's users. When the service is running well the path taken does not matter much. It is often the case, however, that the main service path becomes inaccessible under adverse conditions. Your HTTP requests are not being serviced before timeouts kick in, for example. And, consequently, your monitoring is also inaccessible. JMX clients use RMI or direct socket pathways to connect to the service and so the JMX client can continue to monitor and manage the service.

As Mr Fisher says (first comment to the posting), JMX is one of the "golden parts" of the Java ecosystem. (JBoss was built on top of it.) Current JMX coding practices are more sophisticated than in the early days. The "MBean" and "MXBean" interface suffixes continue to support quick and dirty monitoring and publishing. And for those with lots of monitoring and management touchpoints there is sophisticated Java annotation processing to turn existing code into touchpoints.
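A sketch of that quick and dirty MBean style (names are illustrative; in a real deployment each public type goes in its own file):

```java
// CounterMBean.java -- the management interface. JMX pairs it with the
// implementation by the naming convention: implementation class + "MBean".
public interface CounterMBean {
    long getCount();
    void reset();
}

// Counter.java -- the service-side implementation (JMX requires both the
// class and the interface to be public in a real deployment).
class Counter implements CounterMBean {
    private long count;
    public synchronized void increment() { count += 1; }
    public synchronized long getCount() { return count; }
    public synchronized void reset() { count = 0; }
}

// At service startup, register the bean with the platform MBean server:
//
//   MBeanServer server = ManagementFactory.getPlatformMBeanServer();
//   server.registerMBean(new Counter(), new ObjectName("com.example:type=Counter"));
```

Once registered, any JMX client (jconsole, for example) can read getCount() and invoke reset() over its own network path.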

I have to say, again, I find gnuplot's dumb terminal support so useful when you are at the command line and need to see a plot of some data. The plot is very rough but it usually gives you enough insight into the data to decide whether or not to continue exploring it. The script I am using now is
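As a sketch, such a script can be as small as this (the data file name and columns here are hypothetical):

```gnuplot
# Render an ASCII plot directly in the terminal.
set terminal dumb size 120, 30
plot "data.tsv" using 1:2 with lines title "value"
```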

I have been deeply troubled recently by having been given a name for what I/we see happening. Our world's experiences are rapidly being shared "under glass" [1]. Be it phones, tablets, desktops, etc. We photograph our children and show the image under glass and not on paper. We hear a poet not in person but from under glass. Call of Duty gets the adrenaline rushing with only the risk of spilling your soda. Ever more of our world's experiences are under glass. For kids under glass has become their first experience. For some it will be their only experience. You can look and listen but you can not touch. Experience as diorama. The outside framed. Technology advances and so we might just be at a low point in the development of experience+technology. I don't think this is the case. The soma of instant gratification to even the artificial is too strong a force. Umberto Eco talked about America's allure to hyper-reality. Under glass they can have it all.

In the posting Cloud-Powered Facial Recognition Is Terrifying a commenter noted that at 98% accuracy the facial recognition software would have 200,000 false positives per year for a typical airport. This is an inconvenience. The terrifying part is that you also have to consider the false negatives. Assume (for ease of calculation) that 1% of the population are terrorists. Within a population of 1,000,000 there are 10,000 terrorists. Of those, the 2% who are not recognized -- 200 terrorists -- will be allowed to fly. Boom! It really doesn't matter what the real ratio of terrorists to non-terrorists is. What matters is the human cost of a false negative.

What if the big 5 are perceived as "magazines"? That is, if you look at what the big 5 are doing it is little more than gaining an audience by offering remarkable content. When you buy a magazine you don't think about the conveyance: the staples or glue binding, the high-resolution images on paper. The same will happen with tablets and other hardware. You pick up the "Vogue tablet" to read its curated content. And the "Maker tablet" to read its. I am sure there will be national and international standards shaping a convergence of software and hardware as these "magazines" accumulate. Just as there is, today, an international standard for shower-curtain rings.

I attended Vernor Vinge's lecture at URI on Tuesday night. He talked mostly about the art and mechanics of writing science fiction and a little about the "singularity." Like Ray Kurzweil, he sees the singularity as the appearance of super-human intelligence. He is a little more open, it seems to me, as to whether this is embodied within humans or wholly non-human. (As a side note, I like Jesse Schell's definition, where the singularity is the point at which the future is unpredictable because future changes come, to human perception, instantaneously.) My question to Vinge was

"What if the singularity is not centered on intelligence but instead on population? Nano-beings whose population grows (near instantaneously) to consume all the planet's resources."

Unfortunately, it was not one picked by the moderator and so I will have to answer this myself.

The following is a comment made within Facebook's walls that I want to share further.

At heart I am a socialist. I live in the USA but was raised in the UK. I firmly believe that it is our government's only obligation to protect its citizens, culture, and environment. To that end, education should always be available to all. Health care should be available to all. A functioning natural environment should be available to all. A robust and varied culture should be available to all. Everything else is a means to these ends.

But this is not the country we live in. Since the 1970s we have had a government ever more focused on "job creation". And this has been expressed time and time again as enabling corporate growth and capital gains growth. We have had 40 years of this messaging. That is 2 generations. The message is firmly planted in the citizen's psyche. The problem is that this is a lie. And worse, a bald-faced lie if you just open your eyes and look beyond our territorial borders. We are an insular people.

Since the height of US world dominance in the 1950s the US population has doubled. We had 150M citizens then and now we have 300M. Our focus has also changed from local solutions/problems to national solutions/problems. (We do, after all, have a national, homogenizing media.) A solution for 300M people is more than twice as difficult as one for 150M people. And, generally, whatever the solution is it is likely wrong. As Henry David Thoreau (kind'a) said "why do I care about the weather in Texas when I live in Rhode Island."

So, this is a long aside to my belief that we should not focus on solving our local problems at the national level. We can solve them locally for RI. There is no reason why RI can not have universal health care. There is no reason why RI can not have free higher education. There is no reason why RI can not give economic aid. RI has to make sure that this is all accomplished in an equitable way for past, present, and future citizens. But I feel this can be done.

"Boot and his colleagues say that none of the studies they examined avoided all of the methodological pitfalls, and that this raises doubts about the cumulative evidence that action video games enhance cognition. Boot stresses that the studies' claims are not necessarily wrong — but although the available evidence is promising, it is not compelling enough for researchers to draw strong conclusions legitimately."

To some degree the language does not matter. What matters is that the kid is ready for the abstract thinking required for programming and that he or she wants to program. My son, then 10, spent a week programming w/ Scratch. He enjoyed it but then abandoned it because there was no relationship between his life and what he could do within Scratch. If Scratch connected to the outside world (think Maker and Instructables here) he would have continued further (I think). Kids think of programming like drawing a specific picture or building a play structure. I do it now and I am done. For them, it is not an intellectual journey.

Basecamp does not have a means of listing links to external content. It does allow for uploading files, however, and so you can use this facility to have links to external content. I needed this for a small project that uses Basecamp to coordinate activities and Google Docs to hold content. The "link" to external content is encoded in an HTML document. For example,

Replace TITLE with the title of the content and URL with the URL to the content. (Don't forget to HTML-encode any special characters in the title and JavaScript-escape any special characters in the URL.) Save this document to a file and then upload the file to Basecamp. When a user selects the uploaded file in Basecamp the user's browser will be redirected to the content at the URL. A lighter weight version of the HTML content is simply
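As a sketch (TITLE and URL remain placeholders, per the instructions above), a meta-refresh version of the document would be

```html
<html>
  <head>
    <title>TITLE</title>
    <!-- redirect immediately to the external content -->
    <meta http-equiv="refresh" content="0; url=URL">
  </head>
  <body>
    <!-- fallback link for browsers that do not honor the refresh -->
    <p><a href="URL">TITLE</a></p>
  </body>
</html>
```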

The Quora question is Why do many programmers insist that non-graphical tools are superior to GUIs? and the simple answer is that programmers work extensively with the names of things -- machines, directories, files, packages, classes, methods, functions, variables, language statements, fields, tables, actions, commands, etc. And so any tool that lets me use names as navigation to the named thing or things related to the named thing is preferable.

I like the posting Designing command-line interfaces. Being a CLI type developer I have implemented most of Anders' recommendations in my tools. Points I would like to emphasize are

1) For long running commands make them verbose by default. For short running commands, like mv and cp, make them quiet by default.

2) While I like autoconf's use of "no" to turn off an option, eg --no-foos, and "=" to provide an option's optional value, eg --debug for the default port or --debug=port for a specific port, these are not used enough elsewhere and so are too unexpected. Don't use them.

3) Always use a non-zero exit if you find an unknown option or an unknown positional parameter.

4) Use one-dash for one-letter options and two-dashes for multiple-letter options. Eg, -v and --version, -h and --help. There are many libraries available that help parse options. I tend to only use two-dash options and so hand code the parsing.

5) If you don't use one-dash options make sure to reject all positional parameters that start with a dash as this is mostly a user error. If you want to allow positional parameters that start with a dash then use the common "--" parameter to indicate that the remaining parameters are all positional parameters.

6) An option can have multiple arguments. For example, --database url user password.

7) Always have the options --help/-h, --version/-v, --quiet/-q, and --verbose.

8) If a command can run without options make sure the command's results are harmless. Nothing worse than incorrectly using a command and having it destroy data.
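Several of these points -- two-dash options only, a non-zero result on unknown options, rejecting dash-prefixed positionals, and honoring "--" -- can be hand coded in a few lines. A sketch, with illustrative option names:

```java
import java.util.ArrayList;
import java.util.List;

// A hand-coded, two-dash-only option parser sketch.
public class Options {
    boolean verbose, quiet;
    final List<String> positionals = new ArrayList<>();

    // Returns 0 on success, non-zero for unknown options or bad positionals.
    public int parse(String... args) {
        boolean onlyPositionals = false;
        for (String arg : args) {
            if (!onlyPositionals && arg.equals("--")) {
                onlyPositionals = true;   // point 5: "--" ends option parsing
            } else if (!onlyPositionals && arg.startsWith("--")) {
                switch (arg) {
                    case "--verbose": verbose = true; break;
                    case "--quiet": quiet = true; break;
                    default:
                        System.err.println("unknown option " + arg);
                        return 1;         // point 3: non-zero exit for unknown options
                }
            } else if (!onlyPositionals && arg.startsWith("-")) {
                System.err.println("unexpected option-like parameter " + arg);
                return 1;                 // point 5: likely a user error
            } else {
                positionals.add(arg);
            }
        }
        return 0;
    }
}
```

A main() would call parse() and pass its result to System.exit().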

I will, perhaps, add other points for emphasis at a later date. For now, do read Anders' posting.

This is where the somewhat brilliant key idea behind KSDM comes into play. The completed job does NOT free the person to work on another job until that job is pulled into the following phase and work is started there. If work is piling up at a particular phase, those people are NOT ALLOWED to work ahead. As Taiichi Ohno makes so clear, working ahead is waste. Instead of working ahead, they can look around to see what is wrong.

To put this in concrete terms, consider a process which involves (1) detailed design, (2) coding, (3) testing, and (4) documenting. At each of these stages you place a limit on the number of jobs, and for the sake of example let's say that limit is 4. Say the coders have finished coding their four job units and are ready to take a new one. But testing is backed up for some reason, still working on its last four job units, and is not ready to take a new job. The developers are not allowed to pull in a fifth job unit. There is no point in coding up more features when the earlier features are not getting tested or documented. It is also possible that, because the developers are not pulling jobs from design, the design phase becomes filled up with completed tasks.

When work backs up in this way, one should go and figure out why testing is stuck. Maybe the real problem is that the documentation is blocking test. Whatever it is, the primary job of the entire team is to identify the problem with the flow, and fix it. Do not simply work ahead accumulating a huge pile of work for “someone else”. Instead, focus on the big goal, which is to get features completed to a customer ready state as quickly as possible.
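The pull rule above can be reduced to a few lines of code (a sketch of the rule only; the phase names and job representation are illustrative):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// A phase holds at most `limit` jobs and may pull a new job from the
// upstream phase only while it has capacity. A finished job leaves a
// phase only when the downstream phase pulls it.
public class Phase {
    final String name;
    final int limit;
    final Deque<String> jobs = new ArrayDeque<>();

    Phase(String name, int limit) {
        this.name = name;
        this.limit = limit;
    }

    // Returns false when this phase is at its limit (no working ahead)
    // or when the upstream phase has nothing ready.
    boolean pullFrom(Phase upstream) {
        if (jobs.size() >= limit || upstream.jobs.isEmpty()) return false;
        jobs.add(upstream.jobs.remove());
        return true;
    }
}
```

When pullFrom() returns false the team's job is not to queue more work but to find and fix the blockage downstream.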

The biggest problem with Kanban is that it's designed for a world where things go through the line once (e.g. a carmaker).

This got me thinking about whether the problem is with Kanban's use in software generally or instead with the kinds of software development Kanban is more useful in. A service firm builds a single use or single customer tool while a product firm builds a multiple use or multiple customer tool. A service firm tends to have more tasks around the missing parts of the software while a product firm tends to have more tasks around the bugs in the software. For a service firm the factory is the assembling of the parts. For a product firm the factory is perfecting the assembly. I am mostly thinking aloud here but I think it is a useful train of thought as to why Kanban works for some firms better than for others.

The whole Long Now seminar series is very good. If you have iTunes and an iPod just add their podcast to your iTunes library. Note that the lectures' audio is free but video requires membership. Here are specific lectures I keep coming back to

Eliel asked "What's with the moribund twitter bio?" I am thinking that it must have been a "can't get anything right" kind of a day when I wrote that. But in all honesty, I am not a natural father and so find myself consciously working at it everyday and the same for marriage (perhaps every other day here). And, how many times have I been granted but never exercised stock options? I am a very optimistic guy. While everyone else (ok, many) are grasping for success I am happy working at being a little more whole one day at a time.

In the 1970s our US government representatives stopped caring about their constituents and started caring only about keeping their own job and its access to power. The corporations didn't initiate this change. They are, however, proficient at using the change. Corporations are just doing what corporations always have done and that is to maximize shareholder value by aiding representatives to keep their jobs. I hate to deliver bad news but y'all are the shareholders -- via pensions and other retirement vehicles -- and so are benefiting.

If you want to help take away the power of money in politics then support (Lawrence Lessig’s) Fix Congress First.

"Stonebraker said the problem with MySQL and other SQL databases is that they consume too many resources for overhead tasks (e.g., maintaining ACID compliance and handling multithreading) and relatively few on actually finding and serving data."

include references to the hard proof. Pushing logic from the repository to the application does not make anything faster or slower. Cumulative work is just shuffled around. Moving logic from server to client can improve the perceived speed as you have effectively added massively to CPU and RAM capacity. So, if you have a relational data model then use SQL and a relational repository. If your data model is different then use a custom repository. The different NoSQL repositories fill this custom need. Just as, for example, FOCUS fitted the custom need for hierarchical data in the 80s! As to web scale issues, the solutions are all the same no matter how your repository models data.

I need to improve my writing. I have much to say but find myself inarticulate at the keyboard or pad of paper. What I write unfolds from sentence to sentence. There is much backtracking to apply structure afterwards. Time to read some good, short journalism and advocacy pieces and diagram them. I will start with Ellen Liberman's The Reporter column.

1. Unless you have a very good reason to catch an exception, don't catch it.

2. If you can correct the problem implied by the exception, catch the exception and eat it, because you fixed it. However, there are some exceptions that it is unwise to catch.

3. If you can provide additional information about the exception, catch the exception and re-throw a new exception with more information, keeping the original as the inner exception. This is a very good reason to catch an exception, but note that we are still re-throwing it.
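A sketch of point 3 (the record store and names here are hypothetical):

```java
import java.io.IOException;

// Catch, add the context only this layer knows (the record id), and
// re-throw with the original exception preserved as the cause.
public class RecordLoader {

    public String load(String id) {
        try {
            return read(id);
        } catch (IOException e) {
            throw new RuntimeException("failed to load record " + id, e);
        }
    }

    // Stands in for real I/O; always fails so the wrapping is visible.
    private String read(String id) throws IOException {
        throw new IOException("connection reset");
    }
}
```

The caller now sees both which record failed and the underlying I/O cause in one stack trace.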

Doing this has three key advantages. The first is that adding relevant data to the message is trivial. This then encourages messages that truly aid monitoring and debugging. The second is that there is a very low performance cost to logging: if the message is never used (due to the log level) then the only cost of making the call is loading and unloading the stack frame.

The second advantage becomes significant for debugging messages. For example, the debug logger call

logger.debug("{0} moving towards {1}", a, b );

is far less costly to ignore than is

logger.debug( a + " moving towards " + b );

And especially so in a tight loop. One can argue that you can always check the debug level before making a debug call. In practice, however, this tends not to be done and instead the useful debug message is never written.

The third advantage is that it is trivial in the logger implementation to both quote and escape the message's parameters. My implementation passes them through a JSON converter so that I can also see the internals of the parameter's values.
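A sketch of this style of logger (my illustration, not a particular library's API; the JSON converter mentioned above is reduced to simple quoting so the example stays self-contained):

```java
import java.text.MessageFormat;

// Formats the message only when the level allows, so ignored calls cost
// almost nothing, and quotes each parameter on the way in.
public class Logger {
    private final boolean debugEnabled;

    public Logger(boolean debugEnabled) {
        this.debugEnabled = debugEnabled;
    }

    // Returns the formatted message, or null when debug is off. A real
    // implementation would write to an appender rather than return.
    public String debug(String pattern, Object... parameters) {
        if (!debugEnabled) return null;  // the only cost of an ignored call
        Object[] quoted = new Object[parameters.length];
        for (int i = 0; i < parameters.length; i++) {
            // A real implementation would run a JSON converter here.
            quoted[i] = "\"" + parameters[i] + "\"";
        }
        return MessageFormat.format(pattern, quoted);
    }
}
```

The quoting makes empty strings, trailing whitespace, and nested structure visible in the log output.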

I currently work for a small shop where we run a number of middleware services for publishers. We build services in the traditional style of Java servlets, Spring dependency injection, Tomcat deployments, ActiveMQ message bus, MySql for transient data, and Oracle for permanent data. The engineering of this style is well known to the team. But the consequences are that: our deployments are manual; startup times are too long; Spring has driven us to a monolithic implementation. We are servicing our customers' needs. But we need to do better. How do we get better?

If you are a SaaS + Java shop in RI, MA, or eastern CT and would like to talk about tools and processes please contact me at andrew@andrewgilmartin.com.

The summer is approaching and the kids will be home. Yet I need to get work done and I suspect you do too. I would like to create a co-working space here in South Kingstown. A co-working space is a space to work from without the distractions that come with home, family, and coffee shops. It has desks, chairs, wireless, lockers, air conditioning and great lighting. And, with luck, a great location. I am looking to keep the membership costs at no more than $100/month.

If you would like to discuss this further please contact me at andrew@andrewgilmartin.com and we can organize a meeting where the first half is informational and the second is planning.

Update: Everyone likes the idea but I was not able to get anyone to commit. When you work for "free" from home, at cafes, and at libraries, paying $100/month is a lot less appetizing. It will happen one day. Just not for me this summer.

Apple's Java installation does not include the source or javadoc API. Apple does have a developer download at <http://developer.apple.com/java/download/> which does contain these files. On my machine the package in "javadeveloper_10.6_10m3261.dmg" is installed at

/Library/Java/JavaVirtualMachines/1.6.0_22-b04-307.jdk/Contents/Home/

If you are using NetBeans you will need to add this as a new "Java Platform". This posting assumes you called the platform "JDK_1.6". Once you have created the platform you will need to manually fix how javadoc is used. To do this, in the file