Classic WTF - Yes, The Table is Still There

Our Discourse forum has received a mixed approval rating. Many folks like it, but others still find it lacking in certain areas and on certain clients. Well, no matter your opinion, I can guarantee that, just as when the article first ran, our forum doesn't have the problem mentioned below.

When working in a develop-struction environment, it is very tempting to code in these kinds of patches to the database to make sure the code will be compatible.
Of course, a better approach is to have a specific admin function or page which can intelligently analyse and upgrade the underlying database to support the current code version.
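That "analyse and upgrade" admin function might be sketched like this — a minimal illustration using SQLite's `user_version` pragma, where the migration list and table names are invented for the example:

```python
# Sketch of an admin-side upgrade routine: compare the stored schema version
# against an ordered migration list and apply only what's missing.
import sqlite3

# Hypothetical migrations; each entry moves the schema up by one version.
MIGRATIONS = [
    "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)",  # v0 -> v1
    "ALTER TABLE customers ADD COLUMN loyalty_tier TEXT",          # v1 -> v2
]

def upgrade_database(conn: sqlite3.Connection) -> int:
    """Bring the schema up to the version the current code expects."""
    current = conn.execute("PRAGMA user_version").fetchone()[0]
    for version, statement in enumerate(MIGRATIONS[current:], start=current + 1):
        conn.execute(statement)
        conn.execute(f"PRAGMA user_version = {version}")
    conn.commit()
    return conn.execute("PRAGMA user_version").fetchone()[0]

conn = sqlite3.connect(":memory:")
print(upgrade_database(conn))  # applies both migrations -> 2
```

Because the version is checked once up front, running the function again is a no-op — the opposite of re-checking the schema on every record access.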
I work in vertical integration, which basically means shoehorning new features into a warehousing application as requested by clients.
The software allows the database to be customised with new tables and new fields in certain core tables.
Sometimes different databases have similar customisations implemented in different ways — e.g., one uses a checkbox and the other a drop-down control with two options. It's lots of fun when it comes to writing reports that need to work with both structures.
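One way to keep a single report query across the checkbox and drop-down variants is to hide the difference behind a view. A hedged sketch, with hypothetical `orders` columns standing in for the two customisations:

```python
# Normalising two site-specific customisations: one site stored a checkbox as
# an INTEGER flag, another stored a two-option drop-down as TEXT. A view lets
# one report query run against either schema. All names are invented.
import sqlite3

def make_priority_view(conn: sqlite3.Connection) -> None:
    cols = {row[1] for row in conn.execute("PRAGMA table_info(orders)")}
    if "is_priority" in cols:              # checkbox site: 0/1 flag
        expr = "is_priority = 1"
    else:                                  # drop-down site: 'Priority'/'Standard'
        expr = "priority_option = 'Priority'"
    conn.execute(
        f"CREATE VIEW orders_report AS SELECT id, {expr} AS priority FROM orders")

# Checkbox-style database:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, is_priority INTEGER)")
conn.execute("INSERT INTO orders VALUES (1, 1), (2, 0)")
make_priority_view(conn)
print(conn.execute("SELECT id, priority FROM orders_report").fetchall())  # [(1, 1), (2, 0)]
```

The report then only ever selects from `orders_report`, and each site's quirks stay in the one place that builds the view.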
Hopefully I have never written code that does the same field check for every record in a table as in the article.
Interestingly, the application provides a screen editor for altering the control layout on certain screens, such as stock and customers. Adding or removing a control updates the field on the underlying table. Saving the screen changes can take 5-15 minutes on a large table, so clearly every record is being touched somehow to make the change. But if we use the SQL tool provided by the DBMS vendor, the same tables can be altered in seconds.
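The seconds-versus-minutes gap is easy to reproduce: adding a column is usually a metadata-only change, while anything that rewrites every record scales with the table size. A small demonstration in SQLite (the vendor's DBMS may of course behave differently):

```python
# Compare a full-table rewrite (what the screen editor appears to do) against
# a plain ALTER TABLE, which only touches the table's metadata.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stock (id INTEGER PRIMARY KEY, qty INTEGER)")
conn.executemany("INSERT INTO stock (qty) VALUES (?)",
                 [(i,) for i in range(100_000)])

start = time.perf_counter()
conn.execute("UPDATE stock SET qty = qty")      # rewrites all 100,000 rows
rewrite_seconds = time.perf_counter() - start

start = time.perf_counter()
conn.execute("ALTER TABLE stock ADD COLUMN bin_location TEXT")  # metadata only
alter_seconds = time.perf_counter() - start

print(alter_seconds < rewrite_seconds)
```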

I think the moral of the story is to use a "real world" dataset for testing; that way, if you make a goof, it will at least be obvious.

Even in a development environment, I hope you never pass database credentials around and set up a connection in each and every method that requires database interactivity, let alone store a separate connection reference in every single object. This is first-year-developer garbage code, no matter how you look at it.
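The usual alternative is to keep the credentials in exactly one place and hand out a shared connection. A minimal sketch, with a hypothetical `get_connection` factory and an in-memory DSN standing in for real configuration:

```python
# One module owns the connection details; business methods borrow the shared
# connection and never see credentials. Names here are illustrative only.
import sqlite3

_DSN = ":memory:"      # in real code, read from configuration, not hard-coded
_connection = None

def get_connection() -> sqlite3.Connection:
    """Return the single shared connection, creating it on first use."""
    global _connection
    if _connection is None:
        _connection = sqlite3.connect(_DSN)
    return _connection

def count_customers() -> int:
    # No credentials, no connection setup: just borrow the shared handle.
    conn = get_connection()
    conn.execute("CREATE TABLE IF NOT EXISTS customers (id INTEGER PRIMARY KEY)")
    return conn.execute("SELECT COUNT(*) FROM customers").fetchone()[0]

print(count_customers())  # 0
```

In anything multi-threaded you would swap the single handle for a pool, but the principle — connection management lives in one place — is the same.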

Also, software that dynamically modifies its underlying data model? That's a darn good sign that your data model needs revising. It's also a surefire way to create completely unmaintainable software really, really fast, if you are after job security.


“Discourse: It’s better than some random TDWTF-front-page forum code”

How do you know? It does stupider things than this. And I had to manually type in what.thedailywtf.com/raw/5173/3 to figure out how to get it to post your quote without breaking the formatting.

Also, software that dynamically modifies its underlying data model? That's a darn good sign that your data model needs revising. It's also a surefire way to create completely unmaintainable software really, really fast, if you are after job security.

Depends. An example is a plugin that might be applied to something where a maintenance failure by an out-of-my-control DBA might mean that the plugin is running against an obsolete database schema. In this case, it makes sense for the plugin when first run to verify the parts of the schema to which it needs access, and depending on circumstances either silently update the schema or to exit with a suitable warning as to why it cannot run.
Any code that may go into an SME environment has to assume that the kind of support you get in a large enterprise may not exist. So long as there is a process whereby every schema change made for an enhancement results in the correct update code being added to the plugin's validation step, the software is far from unmaintainable.
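The verify-then-update-or-bail startup check described here might look something like this (table and column names are hypothetical):

```python
# On first run, compare the columns the plugin needs against what the live
# schema actually has; either apply the fix or exit with a clear reason.
import sqlite3

REQUIRED_COLUMNS = {"orders": {"id", "status", "handled_by_plugin"}}

def verify_or_upgrade(conn: sqlite3.Connection, allow_upgrade: bool) -> None:
    for table, required in REQUIRED_COLUMNS.items():
        present = {row[1] for row in conn.execute(f"PRAGMA table_info({table})")}
        for column in sorted(required - present):
            if not allow_upgrade:
                raise RuntimeError(
                    f"schema out of date: {table}.{column} is missing; "
                    "rerun with upgrades enabled or contact your DBA")
            conn.execute(f"ALTER TABLE {table} ADD COLUMN {column} TEXT")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
verify_or_upgrade(conn, allow_upgrade=True)   # silently adds handled_by_plugin
```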

[...] it makes sense for the plugin when first run to verify the parts of the schema to which it needs access, and depending on circumstances either silently update the schema or to exit with a suitable warning as to why it cannot run. [...]

Perhaps I should have clarified that I meant data model updates during normal operation: I would qualify your example as an installation procedure, in which case schema updates make sense. But even then I would be skeptical; either the relevant parts of the schema are not shared with other modules, in which case the plugin could well be contained in its own schema, or they *are* shared with other modules, which means that updating one plugin can break another plugin. You would need to create a dependency list and verify that all components can be updated as well before proceeding - similar to what APT does, for example. Simply going "what the hey, I need this schema modified, so let's do it!" will sooner or later break stuff.
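That dependency-list check can be as simple as asking every installed plugin whether it supports the target schema version before anything gets altered. A sketch with invented plugin names and version ranges:

```python
# Before one plugin bumps the shared schema, confirm every installed plugin
# declares compatibility with the new version; refuse the upgrade otherwise.
def safe_to_upgrade(target_version: int,
                    installed_plugins: dict[str, range]) -> list[str]:
    """Return the plugins that would break; an empty list means it's safe."""
    return [name for name, supported in installed_plugins.items()
            if target_version not in supported]

plugins = {
    "labels": range(3, 6),     # supports schema versions 3-5
    "reports": range(4, 5),    # supports only version 4
}
print(safe_to_upgrade(5, plugins))  # ['reports']
```

This is the "what the hey" fix in miniature: the schema change only proceeds when the blocker list comes back empty.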