Saturday, December 27, 2014

This was definitely a great event to share Kanban and Continuous delivery experiences with other agile practitioners in South Florida. I thank IT Palooza and the South Florida Agile Association for the opportunity.

Sysadmins should be encouraged to reach out to their user base so that Macs get patched. Unlike Ubuntu and other Linux distros, where ntpdate is most likely used to synchronize time, Macs use the ntpd daemon. So no, this is not just a server issue when it comes to Mac OS X.

BTW, back to the ntpd vulnerabilities: follow Apple's instructions for correct remediation. As explained there, 'what /usr/sbin/ntpd' should be run to check the proprietary OS X ntpd version.

Thursday, December 25, 2014

Do you practice careful and serious reading? This is the ultimate question I have come to ask when someone claims to have read a book and I find out that their eyes, rather than their mind, went through its content. There is a difference between becoming familiar with a topic and digesting it.

When you carefully read a book you take notes. I personally do not like to highlight books, as the highlighters I have seen so far (as of December 2014) will literally ruin the book. I think taking notes not only helps with deep understanding of the content but ultimately becomes a great summary for later quick reference.

When you seriously read a book you know what you agree and disagree with. You are not on a sofa distracted by familiar sounds; you are in a quiet space, fully concentrated on receiving a message, processing it and coming up with your own conclusions, questions and, most importantly, answers to the unknown, which now suddenly becomes part of your personal wisdom.

It is discouraging to sustain a debate around a book's content when there has been no careful and serious reading. In my opinion, "reading" a specific subject matter means "studying" it, and of course you can only claim to have studied a subject if you have carefully and seriously read the related material. Seeing a book is not the same as looking into it. Listening to an audio book is not the same as hearing it.

Thursday, December 18, 2014

From the command line I would expect that a simple 'svn diff local/path/to/resource' would show the differences between the local copy and the copy on the Subversion server. However that is not the case: '-r HEAD' needs to be added to the command. Here is how to add an alias 'svndiff' so that you can get those differences:
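A minimal sketch of that shortcut, as a function you could drop into ~/.bashrc (the name 'svndiff' is just a convention; pick any you like):

```shell
# svndiff: diff the working copy against the repository HEAD,
# not just against the locally cached pristine copy
svndiff() {
  svn diff -r HEAD "$@"
}
```

After sourcing ~/.bashrc, 'svndiff local/path/to/resource' also picks up changes committed by others on the server side.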

Tuesday, December 16, 2014

If you are working on a migration from classical web sites to a Single Page Application (SPA) you will find yourself dealing with a domain where all the code lives, mixed technologies that force you to run the backend server, and a bunch of inconveniences like applying database migrations or redeploying backend code.

You should be able to develop the SPA locally though, consuming the APIs remotely, but you probably do not want to allow cross-domain requests or split the application across two different domains.

A reverse proxy should help you big time. With a reverse proxy you can concentrate on developing just your SPA bits locally while hitting the existing API remotely, and still deploy the app remotely when ready. All you need to do is detect where the SPA is running and route the API requests through the local proxy.

Node http-proxy can be used to create an https-to-https proxy as presented below:

Tuesday, December 09, 2014

Is your bank or favorite online shop insecure? You are entitled to act as a conscious user. How?

The first thing any company out there should do with their systems is make sure that traffic between the customer and the service provider is strongly encrypted. All you need to do is visit the SSL Server Test, enter the URL for the site and inspect the results.

If you do not get an A (right now *everybody* is vulnerable to the latest POODLE attack, so expect a B as the best-case scenario) you should be concerned. If you get a C or lower please contact the service provider immediately, demanding they correct their encryption problems.

Be especially wary of those who have removed their websites from SSL Labs. Security *just* by obscurity does not work!!!

Monday, December 08, 2014

A custom date in format m/d/yy is not formatted in LibreOffice; instead a number is shown. This number is the date's serial day count starting at January 1, 1900, so a serial number of 5 corresponds to January 5, 1900. There is a leap year bug to account for, though: Excel wrongly treats 1900 as a leap year, so a correction needs to be made (if (serialNumber > 59) serialNumber -= 1), as you can see in action in this runnable.

So if you convert Excel to CSV, for example, and get a number instead of an expected date, open that Excel file from the LibreOffice GUI and format a cell as Date to see whether the output makes sense as a date. Once convinced that those are indeed dates, all you need to do is apply the algorithm to the numbers to convert them to dates in the resulting CSV.
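The algorithm can be sketched in a few lines of shell (GNU date assumed; the function name is my own):

```shell
# Convert an Excel/LibreOffice date serial to m/d/yyyy.
# Excel wrongly treats 1900 as a leap year, so every serial past
# Feb 28, 1900 (serial 59) must be decremented by one.
serial_to_date() {
  n="$1"
  if [ "$n" -gt 59 ]; then n=$((n - 1)); fi
  date -d "1899-12-31 +$n days" +%-m/%-d/%Y
}
```

For example, 'serial_to_date 1' prints 1/1/1900 and 'serial_to_date 61' lands on 3/1/1900 (1900 was not a leap year).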

Sunday, December 07, 2014

Protecting your app starts by protecting your users. There are several HTTP headers you should already be using in your web apps, but one usually overlooked is Strict-Transport-Security.

This header ensures that the browser will refuse to connect if there is a certificate problem, such as an invalid certificate presented by a man-in-the-middle (MITM) attack coming from malware on a user's computer. Without this header the user would be giving away absolutely all "secure" traffic to the attacker. Additionally this header makes sure the browser uses only the https protocol, which means no insecure, unencrypted, plain-text communication happens with the server.

The motivation for not using this header could be to allow mixed insecure content in your pages or to allow self-signed certificates in non-production servers. I believe such motivation is dangerous when you consider the risk. Your application will be more secure if you address security in the backend and in the front end, the same way you should do validations in both.
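As a sketch, assuming an nginx front end (adjust max-age to your rollout comfort level), enabling the header is a one-liner:

```nginx
# Send HSTS on every response; includeSubDomains extends the policy
# to all subdomains as well.
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
```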

Friday, December 05, 2014

Do you practice Continuous Web Application Security? We have learned how to continuously deliver software, and of course that means that anything you do as part of the SDLC should be done continuously, including security. Just like backup-restore testing this is a hot topic. As usual there is no simple answer about what we should all do, because we should all do different things depending on our budget.

Here is though an affordable practical proposal for continuous web application security:

Have an Ubuntu Desktop (I personally like to see what is going on when it comes to UI-related testing) with a Selenium server running and at least the Chrome driver available.

From Jenkins hit (remotely) a local runner that triggers locally running automated E2E tests against your application URL. (I personally believe that E2E concerns belong to developers, whether you have full stack engineers or dedicated front end engineers. I strongly believe they belong to whoever is in charge of the UX/UI.)

The tests will normally open Chrome instances where you can see the UI tests in action if you like. (Did I say that when it comes to UX/UI I like to *see* what is going on in the browser?)

A proxy based passive scanner like zaproxy is installed as well. You can install it easily using a plain old bash (POB) recipe I created here BTW.

If you want to start the proxy with a user interface, so you can look into the history of found vulnerabilities, and assuming you installed it from the recipe, run it as '/opt/zap/zap.sh', or, if you get display issues as I did while using xrdp, with 'ssh -X localhost /opt/zap/zap.sh'.

In order to proxy all requests from Chrome we need to follow the steps below.

If you use Selenium server it must be started after you run the commands below, and Google Chrome must be started after them as well in order to use the proxy. If you are using Protractor with the directConnect flag set to true, you will also need to run these commands before you invoke the tool.

export http_proxy=localhost:8080
export https_proxy=localhost:8080

To stop the proxy we just need to unset the two variables and restart the Selenium service.
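Concretely, that teardown might look like this (the Selenium service name is an assumption; adjust to your setup):

```shell
# Clear the proxy variables so new processes no longer route through ZAP
unset http_proxy https_proxy
# Restart Selenium so new browser sessions pick up the change
# (uncomment and adjust the service name for your setup):
# sudo service selenium restart
```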

LEGACY 12.04: For the proxy to get the traffic from Chrome you need to configure the Ubuntu system proxy with the commands below. All traffic will then go through zaproxy. To turn the proxy off just run the first command; to turn it on run the second; run them all if you are unsure about the current configuration. This is a machine where you only run automated tests, so it is expected that you run no browser manually there, BTW, and that is why I forward all the http traffic through the proxy:
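The commands, reconstructed for GNOME's gsettings (schema names may vary by Ubuntu release, so treat these as a sketch):

```shell
# Turn the system proxy off
gsettings set org.gnome.system.proxy mode 'none'
# Turn it on (manual mode) ...
gsettings set org.gnome.system.proxy mode 'manual'
# ... pointing http traffic at ZAP on localhost:8080
gsettings set org.gnome.system.proxy.http host 'localhost'
gsettings set org.gnome.system.proxy.http port 8080
```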

Every time your tests run you will be collecting data about possible vulnerabilities.

You could go further and inspect the zaproxy results via the REST API, consuming JSON or XML in your Jenkins pipeline, in fact stopping whole deployments from happening. Or you can take a less radical approach and get the information in plain HTML. Below, for example, we extract all the alerts in HTML for http://google.com (none ;-). It is assumed that you have run '/opt/zap/zap.sh -daemon', which makes the REST API accessible from the http://zap base URL:
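A sketch of that HTML extraction (the endpoint path follows ZAP's API conventions at the time; verify against your ZAP version):

```shell
# Build the ZAP API URL that returns all alerts for a base URL as HTML
zap_alerts_url() {
  echo "http://zap/HTML/core/view/alerts/?baseurl=$1"
}
# The request must go through the proxy, where http://zap is answered by ZAP:
# curl -s --proxy localhost:8080 "$(zap_alerts_url http://google.com)"
```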

One last note about security: make sure you never use the environment we have built here to browse sites other than those you are testing. Remember that you have added the OWASP root certificate to your browser, which means other people holding the same certificate could do a number of nasty things to a user working behind this browser setup.
Congratulations. You have just added continuous web application security to your SDLC.

How to parse any valid date format in Talend? This is a question that comes up every so often and the simple answer is that there is no plain simple way that will work for everybody. Valid date formats depend on the locale, and if you are in the middle of a project supporting multiple countries and languages YMMV.

You will need to use Talend routines. For example you can create a stringToDate() custom method. In its simplest form (considering you are tackling just one locale) you will pass just a String as a parameter (not the locale). You will need to add the formats you want to allow, as in the function below. The class and the main method are just for testing purposes and you can see it in action here. These days it is so much easier to share snippets of code that actually run ;-)

Wednesday, December 03, 2014

Are your web security scanners good enough? Note that I use the plural here, as there is no silver bullet. There is no such thing as the perfect security tool.

More than two years ago I posted a self starting guide to get into penetration testing which brought some interest for some talks, consultancy hours and good friends.
Not much had changed until last month, when in the Google Security Blog we learned that a new tool called Firing Range was being open sourced. I said to myself "finally we have a test bed for web application security scanners" and then the next question immediately popped up: "Are the web security scanners I normally use good enough at detecting these well known vulnerabilities?".
I would definitely like to get feedback, private or public, about tool results. For now I have asked 4 different open source tools about their plans to enhance their scanners so they can detect vulnerabilities like the ones Firing Range exposes. My tests so far are telling me that I need to look for other scanners, as these 4 do not detect the exposed vulnerabilities. I have posted a comment on the Google post but it has not been approved so far. I was after answering the main question in this post, but then I realized that if everyone out there ran their tests against their own tools (free or paid) we could gather some information about which ones are doing a better job, as we speak, at finding Firing Range vulnerabilities.
Here is the list of my questions so far:

Wednesday, November 26, 2014

If you grep your Linux server logs from time to time you might be surprised at the lack of an error level. If you want to find, for example, all error logs currently in syslog, how would you go about it? The simple answer is that you cannot without changing the log format in /etc/syslog.conf.

Let us say you configure the priority (%syslogpriority%) to appear as the first character of each line in the log file:
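With such a format in place, filtering by level becomes a plain grep. The template below is a sketch (directive names per rsyslog; adjust to your syslog flavor):

```shell
# In /etc/rsyslog.conf (sketch; 'PriFirst' is a name I made up):
#   $template PriFirst,"%syslogpriority% %timegenerated% %HOSTNAME% %syslogtag%%msg%\n"
#   *.* -/var/log/syslog;PriFirst
# Priority 3 is "err", so error entries are now one grep away:
grep_errors() { grep '^3 ' "$1"; }
# grep_errors /var/log/syslog
```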

Monday, November 24, 2014

Being an Effective Manager means being an Effective Coach. A coach needs to know each client very well. They will all be different, they will have different objectives, and they will be able to achieve different goals. However all clients must be willing to be trained and coached. The coach bases the individual plan on the client's existing and needed skills, the ability to perform based on personal goals, and the personal will of the individual to succeed. The relationship is bidirectional though: if the will is poor on either side, and/or the ability does not match the expected goals, and/or the skills do not match the expected level, then the client and the coach are not a good fit for each other.

Management is no different. Each person is valuable one way or another but nobody is a good match for all types of jobs. The will is necessary, no brainer: a team member is expected to be driven by a will to contribute to the culture, value and profit of the whole group. Skill and ability are a different story.

The difference between skill and ability is very subtle. I tend to think that a team member has an ability problem when all the manager's resources to improve the skills of such a "direct" have been tried without success. Of course the development of the skills and ability of the "direct" will be affected by the skills and ability of the manager. So then how can we be effective managers?

To be effective managers we need to know each of our directs. They are all different, so in order to set them all up for the biggest possible success we need to work with them in a personalized way. I think managers have a lot to learn from the Montessori school. This is a daily task: if you are or want to be a manager you have to love teaching and caring for others' successes.

Wednesday, November 12, 2014

We got this error after upgrading LibreOffice, as it replaced the symlink "/usr/bin/java -> /etc/alternatives/java", which was still Java 6. This issue was corrected in Java 7, so pointing the symlink to the HotSpot JDK 7 or above should resolve the problem.

Monday, October 27, 2014

Security-wise you should check if your website is still using the weak SHA1 algorithm to sign your domain certificate. Marketing-wise as well: with Chrome being one of the major web browsers in use out there, your users will soon feel insecure on your website unless you sign your certificate with the sha256 hash algorithm.

Google has announced Chrome will start warning users who try to visit websites that still use sha1 signature algorithm to generate their SSL certificates.

You can of course use https://www.ssllabs.com/ssltest/analyze.html?d=$domain to test sites exposed to the wild. For an intranet, though, you need a different tool, which of course happens to work for external sites as well:
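One such tool is plain openssl; here is a sketch (the host name in the comment is a placeholder):

```shell
# Print the signature algorithm of a PEM certificate
sig_alg_of_cert() {
  openssl x509 -in "$1" -noout -text | grep -m1 'Signature Algorithm'
}
# Against a live (even intranet) server, pipe the served cert in:
#   echo | openssl s_client -connect intranet.example.com:443 2>/dev/null \
#     | sig_alg_of_cert /dev/stdin
# Look for sha256WithRSAEncryption rather than sha1WithRSAEncryption.
```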

Saturday, October 25, 2014

Wednesday, October 22, 2014

The real question is why public key authentication is not available. Storing passwords and keeping them secure is a difficult task, especially when they are supposed to be used from automated code.

For some reason you still find servers and clients (which we do not control) that accept only passwords for authentication. My advice is to educate, but in many cases you are simply out of business if you do not "comply". Interesting ...

If you must connect using a password then the below should help. Suppose you have a batch file with sftp commands, for example a simple dir command (among others). You can send those to the lftp command:
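A sketch of that invocation (host, credentials, and file name are placeholders; lftp's -u flag takes user,password):

```shell
# Run a batch of sftp commands (e.g. a file containing "dir" then "bye")
# against a password-only server via lftp
sftp_batch() {
  lftp -u "$2,$3" "sftp://$1" < "$4"
}
# sftp_batch sftp.example.com alice 's3cret' commands.txt
```

Keep in mind the password ends up in the process table and your shell history with this approach, which is part of the risk to communicate.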
Use this at your own risk. Do not use it before communicating the risks.

Wednesday, October 08, 2014

The below error would look like a lack of permissions, however permissions hadn't changed, and neither had the user desktop environment where credentials were saved:

svn: E175013: Access to '/some/dir' forbidden

Looking inside "auth/svn.simple/*" I found a password that I tried, but it did not work. The password was incorrect, and the easiest way to correct the situation is to force svn to connect with the user again; the password will be prompted for and, after supplying it, the svn.simple/* file gets updated:
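A sketch of forcing that re-prompt (URL and username are placeholders):

```shell
# Drop the cached credentials so the next connection prompts again
rm -f ~/.subversion/auth/svn.simple/*
# Touch the server once; on success the new password is cached
# (uncomment with your real URL and user):
# svn list https://svn.example.com/repo --username alice
```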

Thursday, September 11, 2014

Microphone not working in a Windows 7 guest running on VirtualBox (v4.3.16) from a Mac OS X (Mavericks) host?

The very first thing you should do is to change the Audio Controller in VirtualBox. Below is a setting that worked for me:

Windows will complain about not finding a suitable audio driver but if you know that all you need is to install a "Realtek AC'97 Driver" for "Windows 7" then you will find http://www.ac97audiodriver.com

After installing the driver, followed by a Windows restart, you should be able to set up the microphone using the "configure/setup microphone wizard" options. If you get into trouble, make sure "properties/advanced" shows "2 channels, 16 bit, 44100 Hz (CD quality)" as the default format, as shown below:

I have read on the web the suggestion that this setting should match the host settings, which you can set from the "Audio MIDI Setup" application. I have tried different combinations and it still works; the important thing is to rerun the microphone setup wizard to make sure distortion is kept to a minimum. In any case I left mine with the below settings:

In my personal experience, overbloated requirements are the norm. So I have the practice (even in my personal life) of analyzing the root cause of problems to try to resolve as much as I can with the minimum possible effort. The 80-20 rule becomes my target, and unless someone before me actually applied it I can say that in most cases I can at least present an option. Whether that is accepted or not depends on many other factors which I would rather not discuss here.

Why this rule is useful in software development is a well known subject. But making the whole team aware of it comes in handy when there is a clear determination to spend 100% of the time producing value. If we can cut the apparently needed requirements down to 20%, our productivity will literally skyrocket. Of course the earlier you do this analysis the better, but be prepared, because perhaps you, the software engineer, will teach a lesson to everybody above you when you demonstrate that what was asked for is way more than what is needed to resolve the real need of the stakeholder.

Saturday, August 23, 2014

How do we eliminate the waste associated with prioritization and estimation activities?

Five minutes per participant per meeting: business stakeholders periodically (depending on how often slots are available from the IT team) sit down to discuss a list of features they need. This list must be short, and to ensure that it is, stakeholders come with just one feature per participant (if the number of slots is bigger than the number of participants then adjust the number of features per participant). Each feature must be presented with its numeric impact for the business (expected return in terms of hours saved, dollars saved, dollars earned, client acquisition and retention, etc.) and concrete acceptance criteria (a feature must be testable and the resulting quality must be measurable). Each participant is entitled to use 5 minutes maximum to present his or her feature. Not all participants will make a presentation: sometimes the very first participant presents a big saving idea that nobody else can compete with and the meeting is finalized immediately. That is the idea which should be passed to the IT team.

The IT team does a business analysis, an eliciting of requirements. The responsible IT person (let us call that person the Business Analyst, or BA) divides the idea's implementation into the smallest possible pieces that will provide value after an isolated deployment, no bells and whistles. In other words, divide it into Minimal Marketable Features (MMF).

The BA shares with the IT development team the top idea and the breakdown.

IT engineers READ the proposal and tag each piece with 1 of a limited number of selections from the Fibonacci sequence representing the time in hours (0 1 2 3 5 8 13 21 34 55 89). Hopefully there is nothing beyond 21 and ideally everything should be less than 8 hours (an MMF per day!!!)

The BA informs the business and gets approval for the MMFs. Note how the business can now discard some bells and whistles when they realize a few MMFs will provide the same ROI. Ideally the BA actively pushes to discard functionality that is expensive without actually bringing back a substantial gain.

Developers deliver the MMFs and create new slots for the cycle to start all over again.

The organization can calculate an expected Return on Investment (ROI) for the Minimal Marketable Features of whatever idea should, without doubt, be implemented next. All that without the unnecessary "muda" (the lean term for waste) related to prioritization and estimation.

Monday, August 18, 2014

I can't help but look at PM from the Product Management angle rather than the Project Management angle. The three constraints (scope, schedule and cost) might be great for building the first version of a product, but enhancement, maintenance, the future, are a different story. Without constant-pace delivery it will be difficult to remain competitive, and that constant pace cannot be sustained if quality is not the number one concern in your production lane.

Sunday, August 17, 2014

Speaking about software productivity, does your team write effective code?

Most people think about efficiency when it comes to productivity. This is only logical, as most people think tactically in order to resolve specific problems. These are our "problem solvers". However, as a reminder, productivity is not just about efficiency but firstly about effectiveness. Thinking strategically *as well* will bring the team the maximum level of productivity; these effective programmers are "solutions providers". What can we do to be effective programmers?

Dr. Axel Rauschmayer in his book Speaking JavaScript, Chapter 26, explains, IMO, what an effective software development team should do. This applies to any programming language, BTW. This is what I take from his statements, and what I support based on my own experience as a programmer:

Define your code style and follow it. Be consistent.

Use descriptive and meaningful identifiers: "redBalloon is easier to read than rdBlln"

Break up long functions/methods into smaller ones. This will make the code *almost* self-documenting.

Use comments only to complement the code meaning to explain the *why* and not the how

Use documentation only to complement the comments meaning provide the big picture, how to get started and a glossary of probably unknown terms

Write the simplest possible code which means code for a sustainable future. In the words of Brian Kernighan "Everyone knows that debugging is twice as hard as writing a program in the first place. So if you are as clever as you can be when you write it, how will you ever debug it?"

The effective programmer works as a "solutions provider" and not just as "problem solver".

Friday, August 01, 2014

Does Project Management provide Business Value? A similar question came up in LinkedIn and I decided to share my ideas on it.

Project Management is part of any product lifecycle. It is a discipline that should help a team achieve a specific goal. It is needed either as a responsibility of a dedicated individual/department or of the whole team.

A team should achieve "predictable delivery with high quality" and for that to happen you will need to measure several productivity KPIs. In the words of Joseph E. Stiglitz: “What you measure affects what you do,” and “If you don’t measure the right thing, you don’t do the right thing.”

So IMO if the PM discipline adjusts to these ideas it is to be considered 'an integral part of the overall success of the team'. If these ideas have not yet been introduced in your team then the PM discipline is a 'must-do' to get you to new levels of productivity. If the PM discipline is thought to be in place but is not adjusting to these ideas, I would definitely consider it 'overhead'.

Thursday, July 03, 2014

Ever wondered why you got an email with "to:" being some other address and not yours? Perhaps you got an email stating "to: Undisclosed recipients:;" and asked yourself why.

To simplify the explanation: the SMTP RCPT command ("RCPT TO:") directs the email to a specific address, the envelope recipient. In the last mile the email is received by that addressee, but only the "To:" message header, if present, will be visible. If it is missing you get "to: Undisclosed recipients:;", and if it is set you get whatever it says. Clearly you can use a different email address there, which in some cases will generate a heck of a lot of confusion ;-). You can confirm this yourself just by using telnet as usual for SMTP:
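A sketch of such a session (server and addresses are placeholders; the annotations after "<-" are mine):

```
$ telnet mail.example.com 25
220 mail.example.com ESMTP
HELO example.com
MAIL FROM:<alice@example.com>
RCPT TO:<bob@example.com>          <- the envelope: who actually receives it
DATA
From: alice@example.com
To: someone-else@example.com       <- the header: what bob's client displays
Subject: envelope vs header

If the To: header were omitted, bob would see "Undisclosed recipients:;".
.
QUIT
```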

Tuesday, July 01, 2014

If cshell (csh) is your default shell, ~/.cshrc is parsed after you log in. If you get the error "variable syntax" your next step is to figure out which shell config file is declaring a variable incorrectly. The below is correct:
Just remove the braces and you will end up with the "variable syntax" error.

Friday, June 27, 2014

Life would be easier if command line tools never used an exit code different from zero unless a 'real' error popped up. The fact that I am trying to install an already existing package should not result in an 'error', but Solaris returns status 4 when running 'pkg install', with the description "No updates necessary for this image.". You have no option other than handling this on a per-package basis, as I show below using a Plain Old Bash (POB) recipe:
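The handling can be sketched as a wrapper that treats status 4, and only status 4, as success (the function name and package argument are placeholders):

```shell
# Solaris pkg(1) returns 4 for "No updates necessary for this image.";
# fold that into success alongside 0, but propagate real failures.
install_pkg() {
  rc=0
  pkg install "$1" || rc=$?
  if [ "$rc" -ne 0 ] && [ "$rc" -ne 4 ]; then
    return "$rc"
  fi
  return 0
}
# install_pkg system/header
```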

How difficult is it to report the JIRA worklog? There are several plugins and a couple of API calls available for free, but none of them so far can report on a basic metric: how many hours each member of the team worked per ticket on a particular date or date range.

I do not like to go to the database directly, preferring API endpoints instead; however, while I wait for a free solution to this problem, I guess the most effective way to pull such information is unfortunately to query the JIRA database directly.

Below is an example to get the hours per team member and ticket for yesterday. You could tweak this example to get all kind of information about worklog.
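A sketch of such a query (MySQL; the worklog/jiraissue table and column names follow older JIRA schemas and are assumptions to verify against your instance):

```sql
-- Hours per team member and ticket for yesterday
SELECT w.author,
       j.pkey                   AS ticket,
       SUM(w.timeworked) / 3600 AS hours
FROM   worklog   w
JOIN   jiraissue j ON j.id = w.issueid
WHERE  w.startdate >= CURDATE() - INTERVAL 1 DAY
  AND  w.startdate <  CURDATE()
GROUP  BY w.author, j.pkey;
```

timeworked is stored in seconds, hence the division by 3600.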
If you want to include custom fields like 'Team' see below:

Saturday, June 14, 2014

What versus How defines Effectiveness versus Efficiency and Strategy versus Tactics. They are not to be confused. The first is the "crawl", the second is the "walk" and without them both you will never "run". Following baby steps works for business as well as for nature.

What do you produce? Is it what the customers need or what you think they should need? Being effective means to do *what* is required, no more, no less. Have a solid strategy to be effective.

How do you produce it? Are you focused on predictable delivery with high quality or on resource utilization? Being efficient means to focus on "how" to do the whole job on time and on budget. Being efficient means to create tactics that align completely with the strategy.

Mastering these concepts is crucial for any leader. The great leader is a strategist who creates and oversees the tactics being used so that they comply 100% with the strategy. Defining "what" to do (the goal, the mission, the end), is step number one. Determining "how" it will get done (the effort, the actions, the means) is the second step. This means your second step must never overtake the first.

Productivity is the ratio between the production of "what" we do versus the cost associated to "how" we do it. It is a result of how effective and efficient we are. It is ultimately the leader's performance review. You need no review performed by a supervisor to know where you stand as a leader. Have a strategy and constantly monitor that your tactics comply with it. If the productivity goes up you are a great leader. Be proud of yourself and sell that as your greatest skill.

Friday, June 13, 2014

Is Java slow for web development? Code, compile, deploy is a necessary evil, but not for development: there we want just to change and test.

Even those that decide to go with interpreted languages at some point need to compile and deploy for scalability purposes. This is not rocket science. As an oversimplified explanation, if the runtime application code needs to be interpreted every time it runs then resources are used inefficiently.

When the Java Servlet specification appeared in the market at the end of the 90's we were already coding dynamic web pages using CGI (C and even unsafe unix power tools), Perl and PHP. We were developing fast indeed. Why did we move towards Java? There is no simple answer, but for one thing, Java scaled to way more concurrent users.

And yet we were coding Model 1 at the beginning. That meant we could put the code in the infamous JSP scriptlets and see the results immediately in the browser just as PHP did.

Then several papers convinced us that separation of concerns was needed and we moved to Model 2, where the application logic was now in compiled servlets and the presentation code was in JSPs. At that point the JVM should have had what it in fact lacked for years: dynamic code reloading.

In the early 00's Sun shipped JDK 1.4 with HotSwap to address the issue, but only partially: only changes inside method bodies would be dynamically reloaded, so if you changed anything from a simple method name to a new class you would still need to recompile and redeploy.

In 2000, though, JUnit hit the market, and many Java developers have relied on automated compilation and test runs from the CLI or IDE ever since. This technique has allowed us to rapidly develop functionality while providing automated test cases. Of course, when the time comes to test in the application server, fast code replacement is a must-have. The pain continues when not only are dynamic languages like Python and Ruby more developer friendly, but on top of them new frameworks appear offering rapid code generation.

At the end of 2007 JRebel hit the market. Since its inception it has been free for open source projects but commercial for the enterprise; clearly still an issue for small companies like startups, where you want to save every penny.

Concentrated on the efficiencies of the runtime, the JVM has not evolved as we all would have expected. Instead it has become the foundation to run more dynamic languages, like Ruby and Scala for example.

Many people have moved to languages like Groovy, others have moved to use frameworks like Play but the common denominator has been the lack of an effective Hotswap engine.

Enough history. It is 2014 and here is how you patch the latest version of JDK 1.7 (once we conclude our Java 8 migration I will post instructions in this blog) to allow in-place class reloading. In addition I am including how to deploy HotswapAgent for your typical MVC Java application. HotswapAgent supports Spring, Hibernate and more. I have tested this with jdk1.7.0_60 (jdk-7u60-linux-x64.tar.gz):

Tomcat reloads the application context when a change is detected in a class file in the WEB-INF/classes directory. Resin reloads just the class out of the box, which is more efficient. When combined with the Dynamic Code Evolution VM (DCEVM) and HotswapAgent you can cut down on development time, as the changes can include more serious refactorings like renaming methods.

Here is how I tested in resin an application previously running in tomcat which uses Spring, JPA and Hibernate.

Download resin open source version from http://caucho.com/products/resin/download/gpl#download

If you use log4j in your application then replace ${catalina.home} with the full local path in log4j.properties, for example:
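A sketch of the change; the appender name and target path are examples:

```
# log4j.properties
# before (tomcat only; catalina.home is undefined under resin):
#   log4j.appender.FILE.File=${catalina.home}/logs/app.log
# after:
log4j.appender.FILE.File=/opt/resin/log/app.log
```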

Resin is less permissive in terms of schema validation. Tomcat would allow "xsi:schemaLocation" in the taglib tag to be all lower case. You can either correct the taglibs or use the below in WEB-INF/resin-web.xml:
If you have any problems testing resin look for answers or post your question in the resin forum.

Wednesday, June 11, 2014

Install UCDetector for Eclipse. I would love to have a command line tool for the same, probably a subject for another research. For today I was able to understand how many SOAP requests were actually used from the current code.

We found this issue after migrating our Jenkins and Artifactory servers:

===[JENKINS REMOTING CAPACITY]===channel started
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/krfsadmin/.jenkins/cache/artifactory-plugin/2.2.2/slf4j-jdk14-1.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/krfsadmin/.jenkins/cache/jars/3E/F61A988E582517AA842B98FA54C586.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
Failed to instantiate SLF4J LoggerFactory
Reported exception:
java.lang.NoClassDefFoundError: org/slf4j/spi/LoggerFactoryBinder
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:792)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at org.codehaus.plexus.classworlds.realm.ClassRealm.loadClassFromSelf(ClassRealm.java:386)
at org.codehaus.plexus.classworlds.strategy.SelfFirstStrategy.loadClass(SelfFirstStrategy.java:42)
at org.codehaus.plexus.classworlds.realm.ClassRealm.loadClass(ClassRealm.java:244)
at org.codehaus.plexus.classworlds.realm.ClassRealm.loadClass(ClassRealm.java:230)
at org.codehaus.plexus.classworlds.realm.ClassRealm.loadClassFromParent(ClassRealm.java:405)
at org.codehaus.plexus.classworlds.strategy.SelfFirstStrategy.loadClass(SelfFirstStrategy.java:46)
at org.codehaus.plexus.classworlds.realm.ClassRealm.loadClass(ClassRealm.java:244)
at java.lang.ClassLoader.loadClass(ClassLoader.java:411)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at org.slf4j.LoggerFactory.bind(LoggerFactory.java:129)
at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:108)
at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:302)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:276)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:288)
at hudson.maven.Maven3Builder$MavenExecutionListener.(Maven3Builder.java:352)
at hudson.maven.Maven3Builder.call(Maven3Builder.java:114)
at hudson.maven.Maven3Builder.call(Maven3Builder.java:69)
at hudson.remoting.UserRequest.perform(UserRequest.java:118)
at hudson.remoting.UserRequest.perform(UserRequest.java:48)
at hudson.remoting.Request$2.run(Request.java:328)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
Caused by: java.lang.ClassNotFoundException: org.slf4j.spi.LoggerFactoryBinder
at org.codehaus.plexus.classworlds.strategy.SelfFirstStrategy.loadClass(SelfFirstStrategy.java:50)
at org.codehaus.plexus.classworlds.realm.ClassRealm.loadClass(ClassRealm.java:244)
at org.codehaus.plexus.classworlds.realm.ClassRealm.loadClass(ClassRealm.java:230)
... 35 more
channel stopped
ERROR: Failed to parse POMs
java.io.IOException: Remote call on Channel to Maven [/opt/jdk/bin/java, -Dfile.encoding=UTF-8, -Dm3plugin.lib=/opt/jenkins/plugins/artifactory/WEB-INF/lib, -cp, /home/krfsadmin/.jenkins/plugins/maven-plugin/WEB-INF/lib/maven3-agent-1.5.jar:/opt/maven/boot/plexus-classworlds-2.x.jar, org.jvnet.hudson.maven3.agent.Maven3Main, /opt/maven, /opt/apache-tomcat-7.0.52/webapps/jenkins/WEB-INF/lib/remoting-2.41.jar, /home/krfsadmin/.jenkins/plugins/maven-plugin/WEB-INF/lib/maven3-interceptor-1.5.jar, /home/krfsadmin/.jenkins/plugins/maven-plugin/WEB-INF/lib/maven3-interceptor-commons-1.5.jar, 44464] failed
at hudson.remoting.Channel.call(Channel.java:748)
at hudson.maven.ProcessCache$MavenProcess.call(ProcessCache.java:160)
at hudson.maven.MavenModuleSetBuild$MavenModuleSetBuildExecution.doRun(MavenModuleSetBuild.java:843)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:518)
at hudson.model.Run.execute(Run.java:1706)
at hudson.maven.MavenModuleSetBuild.run(MavenModuleSetBuild.java:529)
at hudson.model.ResourceController.execute(ResourceController.java:88)
at hudson.model.Executor.run(Executor.java:231)
Caused by: java.lang.NoClassDefFoundError: org/slf4j/spi/LoggerFactoryBinder
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:792)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at org.codehaus.plexus.classworlds.realm.ClassRealm.loadClassFromSelf(ClassRealm.java:386)
at org.codehaus.plexus.classworlds.strategy.SelfFirstStrategy.loadClass(SelfFirstStrategy.java:42)
at org.codehaus.plexus.classworlds.realm.ClassRealm.loadClass(ClassRealm.java:244)
at org.codehaus.plexus.classworlds.realm.ClassRealm.loadClass(ClassRealm.java:230)
at org.codehaus.plexus.classworlds.realm.ClassRealm.loadClassFromParent(ClassRealm.java:405)
at org.codehaus.plexus.classworlds.strategy.SelfFirstStrategy.loadClass(SelfFirstStrategy.java:46)
at org.codehaus.plexus.classworlds.realm.ClassRealm.loadClass(ClassRealm.java:244)
at java.lang.ClassLoader.loadClass(ClassLoader.java:411)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at org.slf4j.LoggerFactory.bind(LoggerFactory.java:129)
at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:108)
at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:302)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:276)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:288)
at hudson.maven.Maven3Builder$MavenExecutionListener.(Maven3Builder.java:352)
at hudson.maven.Maven3Builder.call(Maven3Builder.java:114)
at hudson.maven.Maven3Builder.call(Maven3Builder.java:69)
at hudson.remoting.UserRequest.perform(UserRequest.java:118)
at hudson.remoting.UserRequest.perform(UserRequest.java:48)
at hudson.remoting.Request$2.run(Request.java:328)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
Caused by: java.lang.ClassNotFoundException: org.slf4j.spi.LoggerFactoryBinder
at org.codehaus.plexus.classworlds.strategy.SelfFirstStrategy.loadClass(SelfFirstStrategy.java:50)
at org.codehaus.plexus.classworlds.realm.ClassRealm.loadClass(ClassRealm.java:244)
at org.codehaus.plexus.classworlds.realm.ClassRealm.loadClass(ClassRealm.java:230)
... 35 more
[ci-game] evaluating rule: Build result

I am not sure if the cache directory was copied over during the migration, but the bottom line is that the error goes away once the duplicated jar is eliminated:
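A quick way to locate duplicated SLF4J bindings, assuming the cache lives under ~/.jenkins as in the trace above. Jar entry names are stored uncompressed, so a plain grep on the raw jar finds them without unzipping:

```shell
# Print every jar under a directory that bundles an SLF4J binding
find_slf4j_bindings() {
  find "$1" -name '*.jar' \
    -exec grep -l 'org/slf4j/impl/StaticLoggerBinder' {} \; 2>/dev/null
}

# usage (path is an assumption):
# find_slf4j_bindings "$HOME/.jenkins/cache"
```

Any jar listed beyond the one binding you actually want is a candidate for removal.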

Sunday, June 08, 2014

Is a report any different from other application Views? I don't think so. Your architecture should have several View/Renderer Engines and Controller/Processor Engines. When would you call any given View a "Report"?

The Latin word "reportare" means "carry back". When we build reports as software engineers, all we do is accept some parameters or a simple request and bring back a response in a specific format on a specific medium. Just as we deliver a paper report after a phone call, in software we deliver a pdf/excel/word file, or an embedded view in a native application or a website. It can be just plain or formatted text in a native application, a simple HTML fragment for a web application or website, and the list goes on.

But what is the difference between this "response" and any other application response? Is it that a report is created with a WYSIWYG application where datasets and a specific Domain Specific Language (DSL) are used? Is it that reports take longer to produce than a regular view?

IMO deciding what is a report and what is not is certainly difficult in some cases. For example if you ask a Microsoft Engineer for a report out of certain transformations that are necessary, the solution will include some SSIS and SSRS projects. Here you use two tools which hopefully you can combine behind a user interface and make it transparent to the user.

Any software application we build will have some kind of user interaction and a response to that interaction (even if some Views result from automated tasks and are sent as email attachments, they still have at a minimum some hardcoded setup and most likely some configuration to read before executing). I maintain that a report is not any different from any other application View. It needs a renderer engine and a processor engine. The combination of processors and renderers can result in the most complicated logic being executed and *reported* back to the user. The application *reports* constantly to the user.

Use any tool that makes sense to build your application Views. Call some of them "report" if you want. But at the end of the day you need View/Renderer Engines and Controller/Processor Engines available to "carry back" to application users the response they are expecting. The combination of these two engines gives you the power to deliver an application structure that meets any demand without having to argue about whether what you are building is actually a "report". If you are carrying back information to the user, you are reporting to the user.

ETL and Report tools are just that, tools. The architecture should not name components based on the tools being used.

Saturday, June 07, 2014

This error happened after we migrated Jenkins to a bigger server. The configurations were alike and the configured mail server was internal. Apparently there were no mail changes, so I don't have a clue why the error occurred. I ended up authorizing the domain.

Friday, June 06, 2014

Continuous delivery must not affect user experience. What to do, then, to support restarting a JEE application? Think about the front end. Have a rich web front end tier and present the user a message like you have seen in sites like Gmail:
The above is of course related to the lack of connection, but you can easily inform the user that you are retrying for other issues as well. See below a different message Gmail sends back when, for example, I change my /etc/hosts to point mail.google.com to yahoo.com ;-)
This creates the opportunity to restart the server, catch any 5XX HTTP errors, and retry until the backend server is back. Needless to say, your backend server is expected to come back sooner rather than later.

Saturday, May 31, 2014

I am reluctant to accept the myth that Java web applications don't fit well in agile environments. The main criticism is that unless you use a commercial tool, a plugin architecture or an OSGi modularized app, you will end up having to restart the server in order to deploy the latest application version.

But what if the time the application takes to load were actually a few seconds? Will a user differentiate a 10-second delay originated by slow database access or a backend web service request from a server restart? The answer is no: the user experiencing a delay does not really care about its nature. If we maintain a proper SLA this will simply be "a hiccup".

Even the fastest web services out there will be slow for certain random actions. You will never know if Gmail is taking longer because of a quick server restart, for example. As long as you can get what you need, a wait of a few seconds won't matter.

If the definition of "Done" is "Deployed to production", developers will take more care of deployment performance. Waiting a long time for a deployment means disruption, and Business will never approve minutes of downtime. On the other hand, if you increase your WIP limits you will slow down and quality will suffer. This bottleneck becomes a great opportunity for improvement, as expected: you have reached a Kaizen moment.

There is a need to work on deployment performance. Without addressing that important issue you will constantly delay deployments, tasks will constantly pile up, and the consequences will be terrible, discovered only if you are actually visualizing the value stream. You need to tune your server and your application so they load faster. A restart should be a synonym of a snap.

In a typical Spring application you proceed as with any other application. Logs are your friends. Go ahead and turn on debug level, search for "initialization completed" and confirm how much time this process takes. In production you had better use lazy initialization:

<beans ... default-lazy-init="true">

This contributes of course to the overall "Server startup". But there is more to do. Check this out.

It should become evident from simple log inspection what is the culprit for a slow server startup. Let us review the below example:

The first clear issue is that ”Initializing Spring root WebApplicationContext” takes 36 seconds which is almost half of the time the whole server takes to startup. The second issue is that “Initializing Spring FrameworkServlet” takes 14 seconds which is a moderate 10% of the whole server startup time. Spring tuning is needed in this example.

What about the other 40% of the time? Servers also can be tuned. For tomcat there is a lot we can do. For example, like explained in the link, if you find an entry for "SecureRandom" in catalina.out most likely your server is spending valuable seconds generating random patterns for use as session id. Using the below setting saves you those seconds as explained in the link:

-Djava.security.egd=file:/dev/./urandom

I found myself saving ten seconds by adding the attribute and node shown below. The explanation again can be found in the provided link:

...
<web-app ... metadata-complete="true">
<absolute-ordering />
...

Eliminating unnecessary jars demands listing them all first. Note that I am sorting them on purpose, just in case the app carries a jar already included in the container, or two versions of the same jar (which should be impossible if you are using a plugin to check for duplicates anyway):

find /opt/tomcat/ -name "*.jar"|sed 's#.*/##'|sort

Then find which classes are inside those jars you are unsure about. Now you do need the whole path, so let us pick as an example jtidy-r938.jar, which Hibernate includes as a dependency. Here are the relevant commands, which you will need to adapt according to your machine paths:
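Since the exact paths vary per machine, here is a hedged sketch. Jar entry names are stored uncompressed, so the class list can be scanned without unzipping, and compiled .class files keep referenced class names as plain strings:

```shell
# List the class entries packed in a jar (raw scan, no unzip needed)
jar_classes() {
  grep -ao '[A-Za-z0-9/$]*\.class' "$1" | sort -u
}

# Then check whether the webapp classes reference any of them, e.g.:
#   jar_classes WEB-INF/lib/jtidy-r938.jar
#   grep -rl 'org/w3c/tidy' WEB-INF/classes
```

If nothing under WEB-INF/classes references the jar's packages, it is a candidate for removal.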

failonstatus - A single or comma-separated list of HTTP status codes. If set this will force the worker into error state when the backend returns any status code in the list. Worker recovery behaves the same as other worker errors. Available with Apache HTTP Server 2.2.17 and later.

However, as soon as the backend returns one of those status codes the proxy also sends it back to the client. This behavior should be configurable, with for example a SilentOnStatus option that works just like FailOnStatus but prevents the error from reaching the client.

As it stands, our only recourse is to create an ErrorDocument and include some logic to automatically retry while telling the user that a recovery from the error is coming soon. For example you could redirect to the domain root after five seconds with a Meta Refresh:
This is a feature needed to make sure users do not get an error message when an application server is restarting and so unavailable (500) or when it is available but at a point where the application has not been loaded (503).
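A minimal sketch of that ErrorDocument approach; paths and wording are assumptions:

```
# httpd.conf: serve a retry page when the backend is restarting
ErrorDocument 500 /error/retry.html
ErrorDocument 503 /error/retry.html

<!-- /error/retry.html: go back to the domain root after five seconds -->
<html>
  <head><meta http-equiv="refresh" content="5;url=/"></head>
  <body>We are experiencing a hiccup. Retrying in 5 seconds...</body>
</html>
```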

Thursday, May 22, 2014

Apache was returning 500. From logs:
The openssl certificate validation would say:
So it will not state the classical "Verify return code: 10 (certificate has expired)" when the certificate is indeed expired. That is why you had better check for expiration directly:
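For example, assuming a PEM certificate (host names and paths below are placeholders), openssl can report the expiration date and check it directly:

```shell
# Print the expiration date of a live server certificate:
#   echo | openssl s_client -connect www.example.com:443 2>/dev/null \
#     | openssl x509 -noout -enddate

# Exit non-zero if the certificate expires within the next N seconds
cert_valid_for() {  # usage: cert_valid_for cert.pem seconds
  openssl x509 -checkend "$2" -noout -in "$1"
}
```

The -checkend test is handy in cron jobs to alert well before the expiration date.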

Thursday, May 15, 2014

Talend Open Source needs Dynamic Schema for delimited files. Only the commercial version allows dynamic schema.

We need to build a component called tFileInputDelimitedExtract. You could use tFileInputCSVFilter as a reference for the implementation. Unfortunately I don't have time for this at the moment, but at least let me state the specifications in case someone decides to go further with the implementation. It could be a good project for someone willing to learn Talend component creation, for example.
At the moment a quick "hack" for new unexpected inner columns is to use 'cut' to exclude them. Below we remove the unneeded 7th column from a pipe-delimited file:
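A sketch of the cut hack; the column index comes from the example above and the file names are placeholders:

```shell
# Remove the 7th column from a pipe-delimited stream
drop_seventh() {
  cut -d'|' -f1-6,8-
}

# usage: drop_seventh < input.psv > output.psv
```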

Wednesday, May 14, 2014

Where did your Java architecture go? Classycle might have good answers for you. It is easier to configure than JDepend, which was the de facto open source cycle analyzer before and which you might still want to check out.

Here is how to analyze the Spring core jar, for example. Even though the below uses the plain command line, Eclipse and Maven plugins are available, so you might want to check those out. Especially, you should build ddf files to enforce your architectural layers and make the build fail when they are violated.
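A ddf sketch of that kind of layer enforcement; the set names and package patterns are hypothetical and the exact statement syntax should be checked against the Classycle documentation:

```
# architecture.ddf (hypothetical packages)
[view]    = com.example.app.view.*
[service] = com.example.app.service.*

# services must never depend on the presentation layer
check [service] independentOf [view]
```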

In the Apache configuration file, inside the "Location" directive, use "Satisfy" before "Require". Note that you might have a second "Require" directive below a "LimitExcept"; make sure you *also* use "Satisfy" there, for example:
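A sketch in Apache 2.2 syntax; the path, network range and auth details are assumptions:

```
<Location /admin>
    AuthType Basic
    AuthName "Admin"
    AuthUserFile /etc/httpd/passwd
    Order deny,allow
    Deny from all
    Allow from 10.0.0.0/8
    Satisfy any
    Require valid-user
    <LimitExcept GET POST>
        Satisfy any
        Require valid-user
    </LimitExcept>
</Location>
```

With "Satisfy any", either the host restriction or the user authentication is enough to grant access.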

Friday, May 09, 2014

Is TDD dead? The question drove today's hangout between Kent Beck, Martin Fowler and David Heinemeier Hansson.

David challenges the TDD supporters, stating that doing TDD feels unnatural and unenjoyable most of the time. While Kent and Martin agree that TDD is not the perfect solution to all problems, they argue it has had tremendous value in many projects that have passed through their hands.

Probably Test Driven Development is not a good technique for all projects; however, Test Driven Delivery is. I mean you will never achieve "continuous delivery with high quality" if you do not think up front about how you will test the feature you are about to ship.

Have you ever wondered why so many development teams state exactly the same thing: "Business does not know what they want"? Probably if Business thought about how to test their idea once it is implemented, they would not ask for unnecessary features or forget about important ones.

Have you ever wondered why the defect ratio makes it impossible for the team to deliver a feature in less than a week? Perhaps if not only user stories but clear acceptance test criteria (and test cases derived from them) had been provided, the developer would have automated them, because of course the developer is lazy and will not spend time testing manually in two, three or four different environments.

I would say Test Driven Delivery is very much alive. Is the enforcement of Test Driven Development bad? Probably yes, if it is an imposition and not a necessity. Velocity and enjoyment cannot be increased at the expense of business value creation.

"In what ways can a desire to TDD damage an architecture?" is the question Martin proposed to be answered. Show us the numbers for a conclusive answer.

There is definitely a way to go from a few features delivered once a month to delivering new features at an increasingly quicker pace, up to multiple daily deployments. That cannot be achieved without the confidence Kent is advocating for.

Make sure issues are required to come with test cases up front, ideas are required to come with acceptance criteria up front and make sure the tests run before the feature is considered delivered.

If Business complains about too much time being spent on testing, then keep a backlog of all the acceptance criteria test cases that can be followed manually but were not yet automated, and measure the defect ratio AND the cycle time to deliver features (not bug resolutions). Switch back to providing the tests and measure again. The numbers should show that over a period of 3 months the team is able to deliver more features when providing test automation. But ultimately it will demonstrate that having test procedures documented is the very first step toward delivering beautiful software that just works as intended, with no more and no less than what is actually required to make money.
IMO Quality is the most important non functional requirement.

Thursday, May 08, 2014

I had to investigate an issue related to slow NFS writes from a VMWare Solaris VM.

To debug protocol issues you of course need a TCP packet sniffer. So I started with the following test from Solaris:
Basically we create a 5MB file and transfer it via NFS. The file was taking two minutes to transfer. The resulting /tmp/capture uncovered a lot of DUP ACKs:
From a Linux box we then run something similar:
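The Linux side can be sketched like this; the interface, server name and mount point are assumptions:

```shell
# Capture NFS traffic while writing the test file over the mount:
#   tcpdump -w /tmp/capture -i eth0 host nfsserver &
#   cp /tmp/5mb.bin /mnt/nfs/
#   tcpdump -r /tmp/capture | grep -ci 'dup ack'

# Portable substitute for the Solaris mkfile used to create the test file
make_test_file() {  # usage: make_test_file path megabytes
  dd if=/dev/zero of="$1" bs=1048576 count="$2" 2>/dev/null
}
```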
Then I confirmed the write went fast and with no DUP ACKs. After we shipped the issue to Infrastructure they found the culprit to be the usage of a conflictive network adapter in VMWare. Using the vmxnet3 network adapter looks to be the right option when it comes to supporting NFS traffic. No DUP ACKs anymore.

Who should define Usability? Ask your users. They know better than anybody else.

Even when you have an argument about a backend implementation to deliver a feature, try to think about the final impact it will have on UI and UX, and then simply ask your users if in doubt; their responses will drive you to the best and simplest approach. Always know *why* you are doing what you are doing.

Administrators should not be able to log in from the wild, for security reasons. This is something Unix, and later Linux, got right up front: if you want to become a super user or administrator you must do so after you have gained access to the target system. You still see people doing all kinds of stuff to overcome this "limitation". Don't do it!

Nowadays everything needs to be accessible from everywhere, JSON services feed Web Applications and native mobile applications. The trend will continue with the Internet Of Things (IoT), wearables, you name it. But we cannot forget about the basics: An application administrator should not have access to the application from the wild. In fact several other roles should better be restricted to have access to the application only from internal networks. Exposing too much power publicly (even if strong authentication and authorization mechanisms are used) is a vulnerability that we can avoid if we are willing to sacrifice usability for privileged accounts.

The Administrator does not need the same level of usability as the rest of the users. Other higher privileged accounts might not need it either. Be wise about IP authorization.

Disclosure of ID data and predictable ID format vulnerabilities are considered low risk. In fact you can search around and you will not find much about them. For example, most folks will highlight the advantages of using UUIDs versus numbered IDs when it comes to load balancing, but few will acknowledge the security issue behind the usage of predictable numbers.

Don't be misled by risk classifications; these vulnerabilities can be serious and could cost companies their very existence.

I hear statements like "well, if the account is compromised then there is nothing we can do". Actually there is a lot we can do to protect the application against stolen credentials. Two-factor authentication is one of those measures; it is often associated with just the authentication phase, but it can also be used as added authorization protection. Sometimes it is just about compromises with usability.

Disclosure of ID data is about listing views. A list should never provide sensitive information. If you want to access such information you should go an extra step and select the entity first, seeing the information only in the details page. However, there is little protection in doing that: the IDs are still in the list view, and from those each detail view can be retrieved. Avoiding listing pages that lead to sensitive information sounds like the only possible defense, but it is still a difficult one to sell. IMO listing pages should exist only for those who know what they are retrieving; for example, records should be returned only when keywords like names, addresses or known identifiers are provided.

Predictable ID format is about detail and form views. These views demand the use of an ID. If that ID is predictable, like an incremental number, then someone can easily pull the sensitive data for all IDs. If your current model uses sequential, generic numeric IDs, or even symmetrically encrypted IDs, you should consider mapping them to a stored random value. You can achieve this by generating a random UUID per real ID and storing it in a mapping table, exposing to the user just the UUID while still persisting the real ID.
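A sketch of such a mapping table in SQL; the table and column names are hypothetical:

```
-- Keep the real sequential ID private, expose only a random UUID
CREATE TABLE public_id_map (
    real_id   BIGINT   NOT NULL PRIMARY KEY,
    public_id CHAR(36) NOT NULL UNIQUE
);

-- Look up the real ID from the UUID received in the request:
--   SELECT real_id FROM public_id_map WHERE public_id = ?
```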

Defense is to be practiced in depth. Even if an account is compromised you can still prevent a list of sensitive information across your client base from being accessible from the wild.

Solaris killall kills all active processes rather than killing processes by name. This is confusing for those used to Linux: the Solaris man page reads "kill all active processes", while in Linux you read "kill processes by name". Use something like the below in Solaris instead:
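A sketch using pkill, which ships with modern Solaris and Linux alike; the process name is an example:

```shell
# Linux-style "kill processes by name" that is safe on Solaris
kill_by_name() {  # usage: kill_by_name exact-process-name
  pkill -x "$1"
}

# usage: kill_by_name myprocess
```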

On Defense in Depth: Web application security starts with checking the source IP. Even if you have firewalls and proxies in front of your application server, you as a developer must have access to the user's original IP, and the URLs managing private information must be restricted to intranet use.

Let us say, for example, that you have prepared a report with a list of users and some sensitive information (like their real home addresses) for certain high privileged roles only. Let us suppose this has been exposed not only in your intranet web front end but also on the one facing the outside. Right there your issues start. Now the user can access this information from outside the company premises, which means it will become public knowledge if the user session is compromised.

However if you have designed your middle tier to check for the source IP the user won't be able to access the service from outside even if the functionality leaked for whatever reason.

It is then crucial that all HTTP endpoints related to sensitive information are identified. Those should not allow external IPs. It is also crucial to inspect your logs and confirm that you are getting the real IPs of the users hitting your system.
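When Apache fronts the app server, the original client IP typically arrives in the X-Forwarded-For header set by the proxy. A logging sketch to confirm you see real IPs; the format name is an example:

```
# httpd.conf: log the original client IP alongside the proxy IP
LogFormat "%{X-Forwarded-For}i %h %l %u %t \"%r\" %>s %b" proxied
CustomLog logs/access_log proxied
```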

Sunday, May 04, 2014

Friday, May 02, 2014

So you have accidentally deleted /etc/vfstab in Solaris? You should look into /etc/mnttab:
You can recreate /etc/vfstab out of it, but you will need some understanding of the different fields. Or you can always look at a similar machine for guidance. For the above, in /etc/vfstab we will end up with:
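For reference, vfstab lines look like the sketch below; the devices and mount points are examples from a typical machine, not your output:

```
# device to mount    device to fsck      mount point  FS type  fsck pass  mount at boot  options
/dev/dsk/c0t0d0s0    /dev/rdsk/c0t0d0s0  /            ufs      1          no             -
swap                 -                   /tmp         tmpfs    -          yes            -
```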
Just run 'mount -a' to verify everything mounts correctly. Good luck!

Thursday, May 01, 2014

Use PDF Bash Tools for quick PDF manipulation from the command line. Ghostscript and Xpdf, both open source, are a great combination to get the most difficult PDF transformations done.

If your BI framework / tooling does not have good solutions for processing PDF files (as is the case with Talend), then you can leverage your old friend the shell, specifically bash. Simple and effective.
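As an example of the kind of transformation Ghostscript handles well, a page-range extraction sketch; the file names are placeholders:

```shell
# Extract a page range from a PDF with Ghostscript
extract_pages() {  # usage: extract_pages in.pdf out.pdf first last
  gs -sDEVICE=pdfwrite -dNOPAUSE -dBATCH -dQUIET \
     -dFirstPage="$3" -dLastPage="$4" \
     -sOutputFile="$2" "$1"
}

# usage: extract_pages input.pdf pages-2-4.pdf 2 4
```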

Continuous integration (CI) makes technology debt irrelevant. Technology bugs are addressed right away to keep the CI pipeline happy. There is no need for business consent to open an application bug, no prioritization-related cost, no user base penalties because of instability.

I would go further and challenge: why does refactoring need a special tech debt issue tracking number? The team has a velocity / throughput and just needs to strive to make it better. The team knows best what technology debt needs to be addressed with urgency when a ticket related to the code in need pops up. There is no better way to make a team member aware than a marker in the code (a TODO?).

This shift from a ticketing system back to code annotations will let the team understand that "nice to have" is equivalent to YAGNI and should be discarded. It will eliminate the operational cost of organizing issues that only the team really understands and about which business operations and analysts have nothing to say. Ultimately this will allow the team to deliver the Minimum Marketable Feature (MMF) with the best possible quality at the fastest possible rate.

Sunday, April 27, 2014

API usability is not different than application usability. A good application must be designed for the best possible user experience. An Application Programming Interface must be as well.

So next time you are creating an interface to expose to a consumer, regardless of whether that consumer will be the lead of your country or a software developer, work *with* your consumer(s) to make sure you get it right.

Separation of Concerns (SoC) helps with that. Even if you have an all-star team where everybody does everything from A to Z in the SDLC, you might end up with greater results if you divide the work and rotate the team through the different concerns. You will naturally avoid violations of SoC.

Saturday, April 26, 2014

Certainly, as the saying goes, "when in Rome, do as the Romans do" is an important aspect for individuals. But how about a group of diverse individuals working as part of a team?

While the saying still holds, you can't ignore the bit of tolerance you will need to cope with differences. Going more diverse brings strengths to the company, but without the right team psychology diversity can become a double-edged sword. Start by making sure the mission, vision and strategy are well understood, accept the fact that everybody is different, know what to expect from each member, and encourage everybody to care about the common goal and put the differences aside.

What is agile interdependence? As a software engineer I want to read the founding fathers so that I know my rights. You may be excited about a lot of languages and technologies, but without social guidance you will not easily fit in as part of a team.

My recommendation: read, and make both IT and non-IT departments read, three important documents:

Non-Functional Requirements should be sticky. I argue that Quality, Usability, Performance, and Capacity are the four you must keep an eye on as a priority. They define the success of any product, including software applications.

The application must be tested, and the tests have to be automated, because otherwise quality cannot be guaranteed over time as the number of features to be tested grows. Dr. William Edwards Deming's philosophy for quality control is often summarized in the ratio

Quality = results of work efforts / total costs

which should be interpreted as follows: quality increases when the ratio as a whole goes up, not when the focus is merely on eliminating cost. If you focus only on cutting costs, you are most likely pushing problems into the near future, when rework will be needed to correctly fix your product.
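As a minimal sketch of what "automated" means here: tests written as code that can run on every build, so quality is re-checked without manual effort as features accumulate. The function and test names below are illustrative assumptions, not from the original post.

```python
# Hypothetical function under test: computes an order total with tax.
def order_total(subtotal: float, tax_rate: float) -> float:
    if subtotal < 0 or tax_rate < 0:
        raise ValueError("subtotal and tax_rate must be non-negative")
    return round(subtotal * (1 + tax_rate), 2)

# Automated tests: these run unattended on every build, so the
# quality guarantee scales even as the feature count grows.
def test_order_total_applies_tax():
    assert order_total(100.0, 0.07) == 107.0

def test_order_total_rejects_negative_input():
    try:
        order_total(-1.0, 0.07)
        assert False, "expected ValueError"
    except ValueError:
        pass

if __name__ == "__main__":
    test_order_total_applies_tax()
    test_order_total_rejects_negative_input()
    print("all tests passed")
```

In practice a test runner such as pytest would discover and execute the `test_*` functions automatically as part of a continuous-delivery pipeline.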
The application must be user friendly: it must do what the user expects with the minimum user effort. Every extra mouse action, voice command, or keystroke counts. Usability matters.

The application is supposed to wait for the user, not the other way around. Performance matters.

The application must handle the expected load. How many users are expected to hit the new feature? Will the traffic be spontaneous because of a mailing campaign? Do not assume your system will handle any load. Do your math and avoid surprises. Capacity matters.
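The "do your math" advice amounts to a back-of-envelope estimate of peak load. A sketch of that arithmetic for the mailing-campaign scenario, where every number is an illustrative assumption and not from the original post:

```python
# Back-of-envelope capacity estimate for a mailing-campaign traffic spike.
# All inputs below are illustrative assumptions.
recipients = 200_000           # emails sent in the campaign
click_rate = 0.05              # fraction of recipients who click through
spike_window_s = 15 * 60       # assume most clicks land within 15 minutes
peak_factor = 3                # peak traffic vs. the window average

clicks = recipients * click_rate
avg_rps = clicks / spike_window_s
peak_rps = avg_rps * peak_factor

print(f"expected clicks: {clicks:.0f}")
print(f"average load: {avg_rps:.1f} req/s, peak load: {peak_rps:.1f} req/s")
```

Even this crude model turns "will it hold?" into a concrete number to test against: if the system has only ever been load-tested at 10 req/s, a projected peak of ~33 req/s is a surprise you caught before the campaign, not after.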

Wednesday, April 23, 2014

The Minimum Marketable Feature (MMF) is key to a team's survival. This is especially true for small software development teams in non-software-centric companies.

Ask yourself whether the issue you are addressing will have a direct impact on the life of someone who is not a software geek. If Business ends up stating that the feature yields a high Return On Investment (ROI) in a very short period of time, then you have created or contributed to a marketable feature.

Then comes the MMF: the feature that takes the minimum possible time to develop while still earning the above reaction from Business.

If the team is not producing enough MMFs, most likely Business is actively looking at alternatives.

This is not just a Manager's concern; it is your concern as a team member, no matter what your position is. I would rather read a resume that states "I delivered 12 MMFs in a year" than one that states "I saved a million dollars in a one-year-long project". The first statement denotes clearer and longer-term strategic thinking.

This is a great question to ask in an interview, as I proposed on LinkedIn, especially for those who work closest to C-level executives, such as Project Managers:

What are the top 12 minimum marketable features your team produced during your best year as a PM?