IntelliTect Web Site (http://intellitect.com)

"You're agile, why are you using Project?"
http://intellitect.com/youre-agile-why-are-you-using-project/
Thu, 30 Jul 2015 15:54:41 +0000

Recently, as I was putting together an enterprise scale SharePoint migration plan for a client, a colleague asked, "Why are you using Microsoft Project? Why don't you use a Scrum style Product Backlog instead?"

An excellent question, I thought, and one I've encountered before. For the record, Scrum is my project management framework of choice for complex technical projects. That doesn't mean it's the only project management approach I use. I also regularly use Kanban when it fits and, yes, good old-fashioned waterfall (meaning a classic work breakdown structure and Gantt project plan) when appropriate.

I choose to use Project based on the following criteria:

The project has a specific sequence of steps

Specific resources or roles are required for specific tasks

External dependencies exist which directly impact the timeline of the project

Project costs need to be tracked at a detailed level

I also frequently use Project to generate project Gantt charts and to estimate costs during proposal and planning phases of projects.

Specific Sequence of Work

A scrum style product backlog can certainly be sequenced, and, it can be easily resequenced as a project changes. Some projects, such as the large enterprise deployment, require a specific sequence of steps: subsequent steps cannot be started before prior steps are completed. With Project, I can configure these dependencies explicitly and maintain the plan easily.

Resource Specific Tasks

One of the key tenets of Scrum based project management is that anyone on the project team can pick up any task on the product backlog. If specific skills only exist with certain team members, and there are a number of tasks for those team members, managing bottlenecks and the project critical path will be easier using a task based project management tool such as Microsoft Project.

External Dependencies

While external dependencies can be managed via Scrum, it is difficult to show the impact of an external dependency in a Scrum product backlog. A good example in my SharePoint migration project is the building of server systems on which to install and configure a SharePoint farm. The client's IT group is responsible for building the systems, and I don't control the IT group's priorities. However, other critical activities, such as installing software on those systems, cannot proceed until the systems are made available. The best that I can do is show the impact that the dependencies may have on the project timeline.

Project Costs

Project has very good capabilities both for estimating up-front costs when planning a project (specific costs and estimates for each resource, plus any external costs) and for tracking project costs against a baseline as the project proceeds.

This isn't to say that one can't track costs when using agile project management; costs are tracked differently, however, and some costs that fall outside the product backlog will be tracked separately from it.

Keeping the Plan Updated

Using Project inevitably leads to struggles to maintain a sensible plan while the project is executing. This is a big advantage of a good Scrum implementation as the plan will essentially update itself as items in the product backlog are completed or changed.
Maintaining an accurate Project plan for a large and complex project can be a full time job. Updating the plan requires a good understanding of how Project works and how best to build the plan for maintenance. Too often, project managers get caught up in capturing minute detail in the project plan, which leads to unnecessary effort to keep the plan sane. Strategies for this are the subject of entire books. General things I keep in mind include planning only to the level of detail that matters and using milestones to track both internal project milestones and external dependencies.

Summary

Any kind of project management requires thinking, planning, adapting to changes and resolving issues. Many of the issues identified above could be called “agile myths” and reasons to not adopt agile. The project manager’s responsibility is to weigh the advantages of the different approaches and determine what will work best for the project at hand.

Just because we prefer agile project management including Scrum and Kanban, doesn’t mean we always use Scrum and Kanban. Sometimes a good project plan is just what you need.

Downloading Attachments from TFS
http://intellitect.com/downloading-attachments-from-tfs/
Tue, 28 Jul 2015 20:43:13 +0000

A few weeks ago I was supporting a client who had attached a significant number of files to various work items in a project and wanted to be able to have them all in a folder. The time required to download these files by hand seemed daunting. Rumors around the office were that someone on the team might be able to do this with some code. As it turned out, there were several teams that needed this, and they had some specific requirements:

Each team needed a different subset of work items based on multiple values in multiple fields.

They needed the filename to contain both the work item number and the name of the file for traceability.

Additionally, the work item types in TFS were completely customized with many custom fields and values. I searched around the internet for a premade solution, but of course couldn’t find one at the time, and hence this blog post. This post targets an on-premise TFS server using Active Directory authentication.

Set Up References

The first thing was to find the DLL references I needed to connect to TFS. It turns out that two are needed, and both are available via NuGet packages. As of this writing, the version number was 12.x.

nuget-bot.Microsoft.TeamFoundation.Client

nuget-bot.Microsoft.TeamFoundation.WorkItemTracking.Client

using System.Net;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;


Connect to TFS

The next step is to set up the connection to the TFS Team Project Collection. We need to attach to the collection because this is where the Work Item Store, which services all projects, is located. This code snippet assumes that the TFS server is named 'tfs'.
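The original snippet did not survive the feed; a minimal sketch follows (the port, virtual directory, and the collection name "DefaultCollection" are assumptions; substitute your own):

```csharp
// Connect to the Team Project Collection on a server named 'tfs'.
TfsTeamProjectCollection tpc = new TfsTeamProjectCollection(
    new Uri("http://tfs:8080/tfs/DefaultCollection"));

// Forces authentication now rather than on first use.
tpc.EnsureAuthenticated();
```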

Next we need to connect to the work item store in the Team Project Collection.

WorkItemStore workItemStore = new WorkItemStore(tpc);

Get the Work Items

Now we can run our query. The TFS query syntax is like SQL, but not exactly. I have added several parameters here so the options would be clear. For those familiar with Work Item Type customization, the names used can either be the display names (Responsible Group below) or the field names (System.AttachedFileCount below). If this seems a bit daunting, you can use Visual Studio to create a query and then view the ‘SQL’ by choosing Save As from the File menu and saving the query to a .wiq file.
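The query itself was lost in the feed; a sketch of what one can look like is below ('Responsible Group', 'Data Team', and the project name are placeholders standing in for the customized fields described above):

```csharp
// WIQL: SQL-like, but not SQL. Display names (Responsible Group) and
// reference names (System.AttachedFileCount) both work.
string wiql =
    "SELECT [System.Id], [System.Title] " +
    "FROM WorkItems " +
    "WHERE [System.TeamProject] = 'MyProject' " +
    "AND [Responsible Group] = 'Data Team' " +
    "AND [System.AttachedFileCount] > 0";

WorkItemCollection workItems = workItemStore.Query(wiql);
```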

Download and Save the Attachments

Now that we have the query results, we can iterate through them and pull the relevant data. In this case it is an attachment which doesn’t come along in the query results by default. The first step is to set up a WebClient that will download the attachments. UseDefaultCredentials is set to true because we are using Active Directory authentication. There are other authentication options.

We are now able to iterate the list of Work Items and get the list of attachments from each. Then in turn each attachment can be downloaded. In this case, I am prefixing the filename of the attachment with the value of another field for requisite traceability.
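A sketch of that iteration (the target folder is an assumption, and I prefix with the work item id here; substitute whichever field your traceability requires):

```csharp
using (WebClient webClient = new WebClient())
{
    // Active Directory authentication; other credential options exist.
    webClient.UseDefaultCredentials = true;

    foreach (WorkItem workItem in workItems)
    {
        foreach (Attachment attachment in workItem.Attachments)
        {
            // Prefix the attachment's file name for traceability.
            string fileName = Path.Combine(@"C:\Attachments",
                workItem.Id + "_" + attachment.Name);
            webClient.DownloadFile(attachment.Uri, fileName);
        }
    }
}
```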

When to Use and Not Use var in C#
http://intellitect.com/when-to-use-and-not-use-var-in-c/
Thu, 09 Jul 2015 22:39:05 +0000

Many languages, particularly scripting languages, have a loosely typed variable type named var. In these languages, var can hold any type of data. If you place a number into a var, it will be interpreted as a number whenever possible. If you enter text, it will be interpreted as a string, and so on. vars can even hold various objects and will behave properly.

As you probably already know, C# has supported the variable type var since version 3.0. Ever since, the debate has raged on: you should always use var; you should never use var. There are arguments for both sides that sound good, as we’ll see below. What I will say is that it depends. I propose that there are places to use var and places not to use var.

One important point to remember with C#, however, is that var is strongly typed. Once a var is declared it can only be of the type with which it was initialized. And a var must be initialized in order to be declared.

Some Arguments For var

var requires less typing. var is shorter and easier to read, for instance, than Dictionary<int,IList<string>>.

var requires fewer code changes if the return type of a method call changes. You only have to change the method call, not every place it's used.

var encourages you to use a descriptive name for the variable, one that describes the instance rather than the type. For instance:

var customer = new Customer() rather than var c = new Customer().

Some Arguments Against var

var obscures the actual variable type. If the initializer doesn’t return a clearly defined type then you may not be able to tell the variable’s type.

Using var is lazy. While var is certainly easier to type than Dictionary<int,IList<string>>, if the variable isn’t named well, you’d never know what it refers to.

Using var makes it hard to know what type the underlying variable actually is. Again, a properly named variable speaks for itself.

var can't contain nullable types such as int?. This is actually untrue, as you can cast a value to a nullable type:

var nullableInt = (int?)null;

How I Use var and Suggest You Do As Well

Although I agree with some of the arguments above, I have fairly specific rules that I use to determine whether I will use var or specify the type literally.

I use var any time that the initialization of the variable clearly tells me what the variable will contain.

var count = 17;

var primeNumbers = new[] { 2, 3, 5, 7, 11, 13, 17 };

var customer = new Customer();

var activeOrders = GetAllOrders().Where(o => o.Active);

foreach (var activeOrder in activeOrders) { … }

Note that in all of these cases, the variable names are descriptive and the initializer is clear. I also pluralize enumerations and arrays.

Cases where I do not use var, even though I still name the variable descriptively, are when the initializer is not clear.

decimal customerBalance = GetCustomerBalance();

CustomerStatus customerStatus = GetCustomerStatus();

I declare customerBalance as decimal to make its type clear. Reasonable alternatives might include double, or even int or long. The point is, I couldn't tell just by looking at the code.

I declare customerStatus as the Enum that it is. This makes it clear there are a limited number of possible values that can be referenced or tested by name.

Michael Brennan, in his blog post Why You Should Always Use the ‘var’ Keyword in C#, makes some compelling points. I recommend it for further reading. However, I prefer the clarity of specifying otherwise obscure types just to make things as clear as possible to the reader who may have to maintain my code in the future.

Design Process and Creative Control
http://intellitect.com/design-process-and-creative-control/
Thu, 09 Jul 2015 21:15:40 +0000

As a UX designer, when starting a project it's not uncommon for me to hear the client say they know exactly what kind of design they want. At first, when this happened, it seemed great: I felt I had a clear vision of what the client wanted and thought I could create something quickly and efficiently that would meet the client's needs. I felt I could almost go straight to designing the project without having to mock up comps or meet in person to further discuss the 'vision'.

It’s dangerous to fall into this line of thinking. If you do, here are three likely outcomes.

Scenario one:

It’s not actually easier. It’s very cumbersome and you don’t enjoy the process or the final product because you have not been able to become invested in it. You are producing and not designing or innovating.

Scenario two:

A few weeks after the design is finished, a client’s friend tells them the site looks bad and points out some obvious flaws that you may have also noticed but did not speak up about. This inevitably reflects on you, the designer, whether you feel that the design flaws were your fault or not. Your credibility is now damaged.

Scenario three:

You design and construct the site just how your client wants it and you hit the mark. You are able to make it somewhat unique. The client is happy. You aren’t very excited about the design but you can live with it. Everything seems great. This can be fine, but after the process is over the client may be left wondering what was so difficult about it and may question the need for your services.

Obviously these are just a few of many potential scenarios, but perhaps you have experienced something similar. These scenarios are not guaranteed, but I’ve experienced them from time to time. It’s worth taking the time to have more up front communication with clients even if there are disagreements on design.

Make upfront communication and brainstorming sessions a habit. It’s often easier in the short term not to, but long term it often pays off.

What I’ve found will help avoid these types of scenarios is to take the time to walk the client through the design process which includes exploring options. It may be uncomfortable at times but your client will usually appreciate the process and see more intrinsic value in what you provide. At the end of the day, they may still decide to go with the original design they proposed, but at the very least they will know you are invested in their success and you are putting considerable thought into the work you create for them.

Idea/Founder Fit

At IntelliTect, we constantly generate new ideas: ideas that will help us do our jobs, help others do their jobs, and make our lives better. We call the process "Ideation"; it is the process of coming up with ideas and figuring out which ones we should go build. We've come up with some good ideas at IntelliTect, but sometimes our ideas don't "fit" us.

You might have heard of product/market fit, which is how well your product’s value proposition fits your market’s needs. Idea/founder fit examines how well your idea fits with you, the founder(s).

Let's say you've come up with the next big thing.

What should you do about it? Wait. Back up. Should *you* be the one to do something about it?

Unless you have one or more of the following attributes, you should consider spending your time on other ideas:

industry expertise

the problem is your own

passion about the problem

or… you have one or more of the above “by proxy”

Industry Expertise
Do you really know what you're talking about? Or is your idea based on what you think you understand about some other industry you actually don't know anything about? Not everyone is an expert in the area they jump into, and that's OK. But if you're not an expert, there needs to be some other motivating factor. (And you had better go find an expert.)

Solving a Problem You Have
If you solve a problem you have, you are essentially your own customer. You will know if you’re successfully solving your problem, how painful the problem is, how it fits into the big picture of your life, and so on. This doesn’t eliminate the need to go out and chat with potential customers and verify your value proposition, but helps to decrease your risk of not finding that product/market fit defined earlier.

Solving a Problem You Are Passionate About
Are you passionate about the problem? If you’re passionate about something, you will have the willpower to keep driving forward when you run into problems, when you have to adjust, when you don’t know what to do, or when you have to go find someone that does know.

By Proxy
There's a caveat. If you partner with someone who has one of the previous characteristics, then I guess you're off the hook. So if you're passionate about building software, for example, and you find someone who is passionate about the actual problem, or perhaps is an industry expert, I'd say that's a good start. The point: if you think you have the next whiz-bang idea but you don't know anything about the industry, aren't solving a problem you have, and don't have a passion for the problem, go find someone who does.

Remember
These rules aren't an indication of success; they are simply a prequalification for starting. And the more prequalifications you possess, the better!

Deploying Windows Services With Psake and Web Deploy
http://intellitect.com/deploying-windows-services-with-psake-and-web-deploy/
Mon, 29 Jun 2015 20:00:15 +0000

At IntelliTect, a common pattern in our client solutions is a Windows service that processes work on a scheduled basis or watches a file location. We often use a combination of the Topshelf framework with the Topshelf.Quartz job scheduling package to solve these problems. These packages expose a useful fluent interface to schedule multiple jobs in a service instance and take care of the service events, including installation on the command line. While this is helpful from a code perspective in reducing boilerplate and increasing simplicity, this design lacks an easy way to deploy new releases.

Since working for IntelliTect (and specifically working with Mark Michaelis and Kevin Bost), I've become a big fan of both PowerShell and the psake build automation tool. Now that NuGet supports solution-level packages, adding psake to a project couldn't be easier. Similar to the Ruby on Rails automation tool "rake", psake augments the PowerShell language with a simple task notation and immutable properties. Tasks are chained together to form dependency trees, and if a task fails, subsequent tasks are not run. Psake also provides helpful wrappers around msbuild and other result-code-returning command line executables.

Generally, the Topshelf service solutions we design also include a web app project that acts as a job status monitor and configuration tool. This means that these Topshelf windows services are deployed to a server with IIS already installed. The “right-click Publish” functionality that Web Deploy (also referred to as msdeploy) affords us in the Visual Studio IDE is a great experience, so I sought to leverage that in deploying Topshelf windows services. With a small amount of configuration, a vanilla Windows Server with IIS can be used to stop, deploy (leaving behind things like logs or config files), and restart a windows service.

Installing and Configuring Web Deploy 3.5

I think installing msdeploy via the Web Platform Installer is easiest, so start there with this link.

If it’s not already installed, install the server Role “Web Management Service” (wmsvc) using the server manager (see Figure 1).

Figure 1

Set up the IIS container for the web site that goes along with the service (this may be a bogus site if you don’t require one).

Grant access to the AD group whose members will be allowed to deploy, under "IIS Manager Permissions" (see Figure 2).

Figure 2

Set up a "contentPath" and a "runCommand" for your service in the Management Service Delegation section in IIS Manager (see Figure 3):

Add a rule with a Provider of “contentPath”

Actions of “*” (for all actions)

A Path Type of “Path Prefix”

A Path that points at the root directory of your windows service instance.

An Identity Type of “CurrentUser”, specifying the AD group above you granted deployment rights to in Step 2.

Figure 3

Make sure WMSVC has sufficient rights to stop and start your service. Execute the following command from an elevated command prompt, and then restart WMSVC:
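The command itself did not survive the feed. One common approach (an assumption on my part, not necessarily the author's exact command) is to grant the account that WMSVC runs as (Local Service by default) rights on the service using subinacl:

```bat
rem Service name is illustrative; run from an elevated command prompt.
subinacl.exe /service MyTopshelfService /grant="NT AUTHORITY\LOCAL SERVICE"=F

rem Restart the Web Management Service so the change takes effect.
net stop WMSVC
net start WMSVC
```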

Build Your Psake Script

In general, I like to pass in the environment (dev/test/staging/production) to my build script, so I will add that as a Property in my psake script, but pass it in via my psake bootstrapper. The bootstrap script's job is to load the psake module and invoke it. I set up the solution to have build configurations that match the names of the environments. Building on the script that comes with the psake NuGet package, my bootstrapper looks like this:
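The original bootstrapper was lost in the feed; a sketch along those lines (module path, package version, file names, and defaults are illustrative):

```powershell
# build.ps1 - psake bootstrapper
param(
    [string]$Environment = "Dev",
    [string[]]$TaskList = @("Default")
)

# Solution-level NuGet package location; the version number will vary.
Import-Module .\packages\psake.4.4.1\tools\psake.psm1

Invoke-psake .\default.ps1 -taskList $TaskList `
    -properties @{ environment = $Environment }

# Propagate failure to the caller (useful on a build server).
if (-not $psake.build_success) { exit 1 }
```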

Properties

The passed-in configuration name is then used as an indexer into a hashtable that holds environment-specific details about the deployment. Also in the Properties, I like to store some other static values, such as the locations of the msdeploy and mstest executables. Here is an example Properties section:
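A sketch of such a Properties section (server names, paths, and hashtable keys are placeholders):

```powershell
Properties {
    # Overridden by the bootstrapper via -properties.
    $environment = "Dev"

    # Environment-specific deployment details, indexed by configuration name.
    $environments = @{
        "Dev"  = @{ server = "devserver01";  serviceName = "MyService" }
        "Test" = @{ server = "testserver01"; serviceName = "MyService" }
    }

    # Static tool and solution locations.
    $msdeploy     = "C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe"
    $mstest       = "C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\mstest.exe"
    $solutionFile = ".\MySolution.sln"
}
```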

Tasks

Once you have your bases covered with the immutable properties you will need, it’s time to add some Tasks to your psake script. Clean and Compile are two easy ones that use the msbuild helper exposed by psake:
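The tasks were lost in the feed; they can look something like this ($solutionFile would be another property, e.g. the path to the .sln file):

```powershell
Task Clean {
    Exec { msbuild $solutionFile /t:Clean /p:Configuration=$environment /v:quiet }
}

Task Compile -depends Clean {
    Exec { msbuild $solutionFile /t:Build /p:Configuration=$environment /v:quiet }
}
```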

Using that standard deployment with a Publish Profile for a web application project means a web site deployment task looks like the following. Make sure to specify the AspNetCompilerPath so you can take advantage of pre-compiled Razor views.
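A sketch of that web deployment task (the project path and the compiler path are assumptions; the publish profile is named after the environment):

```powershell
Task DeployWeb -depends Compile {
    Exec {
        msbuild .\src\MyWebApp\MyWebApp.csproj `
            /p:DeployOnBuild=true `
            /p:PublishProfile=$environment `
            /p:Configuration=$environment `
            /p:PrecompileBeforePublish=true `
            /p:AspnetCompilerPath="C:\Windows\Microsoft.NET\Framework\v4.0.30319"
    }
}
```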

To deploy the windows Topshelf service, we need a task that calls msdeploy to execute the service stop, copy the files, and then start the service again. I found that because of PowerShell's "helpful" handling of quotes, and a broken argument parser inside msdeploy.exe, I had to build the argument list carefully rather than composing one long command string.
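The task itself did not survive the feed; a sketch of one way to do it (the service name, install path, and the default WMSVC port 8172 are assumptions):

```powershell
Task DeployService -depends Compile {
    $server = $environments[$environment].server
    $binDir = Resolve-Path ".\src\MyService\bin\$environment"

    # Each argument is its own array element so PowerShell hands it to
    # msdeploy.exe without re-quoting.
    $msdeployArgs = @(
        "-verb:sync",
        "-preSync:runCommand='net stop MyService',waitInterval=30000",
        "-source:contentPath='$binDir'",
        "-dest:contentPath='C:\Services\MyService',computerName='https://$($server):8172/msdeploy.axd'",
        "-skip:objectName=dirPath,absolutePath=Logs",
        "-skip:objectName=filePath,absolutePath=MyService\.exe\.config",
        "-postSync:runCommand='net start MyService',waitInterval=30000",
        "-allowUntrusted"  # WMSVC uses a self-signed certificate by default
    )

    Exec { & $msdeploy $msdeployArgs }
}
```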

Note the "-skip:" arguments that tell msdeploy not to overwrite the Logs folder or the config file. I generally use the built-in msbuild configuration file transformations, so I can have each environment's config checked into source code control, but setting that up is probably another blog post. This use of msdeploy assumes that you have already successfully deployed the service once manually. You could add preSync and postSync runCommands that also execute "MyService.exe uninstall" and "MyService.exe install" if you have specific things you need your service to do in those events. Also be advised that you will need to trust the self-signed SSL certificate that the WMSVC creates to secure its communications.

Finally, we can create some meta-tasks that wire together our dependencies for convenience:
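For example (task names and the test container path are illustrative):

```powershell
Task Default -depends Compile

Task UnitTest -depends Compile {
    Exec { & $mstest /testcontainer:.\src\MyTests\bin\$environment\MyTests.dll }
}

Task Deploy -depends UnitTest, DeployWeb, DeployService
```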

We now have a serviceable script that I can check in along with my source code and hand to a QA or operations person to build, unit test, and deploy both a web site and a windows service. Using psake's preCondition and postCondition blocks, you could even make assertions that your tasks were successful, or make a request of the site to "warm up" the app pool. If you have multiple web sites to deploy as part of a solution, simply make a hashtable of the project locations and create distinct deploy tasks for each one. Also bear in mind that if your ALM practices require you to deploy pre-built bits for your service, simply change the location to which $binDir resolves in the DeployService task.

Dynamically Changing Cell Data/Behavior Within a Kendo Grid
http://intellitect.com/dynamically-changing-cell-databehavior-within-a-kendo-grid/
Wed, 17 Jun 2015 21:54:02 +0000

I was recently on a project that required various dynamic client side behavior of a Kendo grid. There was specific behavior needed when the user entered a cell, and when the value of a cell changed. An equivalent example for demonstration purposes is the following grid.

Assume the grid has the following rules:

Specifying behavior upon entering a cell: column A is the "master" column whose value determines whether column B is blocked, mandatory, or optional (1 = blocked, 2 = mandatory, other values = optional).

The above screenshot shows what happens if a user tabs out of column B when A’s value is 2.

I found many solutions to the first requirement centered on making a check and then closing the grid's cell via the grid's closeCell() method. However, I found this interfered with the tabbing behavior.

As for the second requirement, the trick is to leverage the grid's Save event handler with the appropriate JavaScript. The C# and TypeScript are below.

The TypeScript used below is nearly identical to the JavaScript syntax:
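The original snippets did not survive the feed. As a sketch, the rule can be extracted into a plain function and wired into the grid's save event (column names and the exact handler body are illustrative):

```javascript
// Column A's value determines column B's state
// (1 = blocked, 2 = mandatory, anything else = optional).
function columnBState(aValue) {
    if (aValue === 1) { return "blocked"; }
    if (aValue === 2) { return "mandatory"; }
    return "optional";
}

// Wiring into the Kendo grid's save event: e.values holds the edited
// values, e.model the row's current data, and e.preventDefault() cancels
// the save so the cell stays in edit mode.
//
// save: function (e) {
//     var a = e.values.A !== undefined ? e.values.A : e.model.A;
//     var b = e.values.B !== undefined ? e.values.B : e.model.B;
//     if (columnBState(a) === "mandatory" && !b) {
//         e.preventDefault();
//     }
// }
```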

EntityFramework Update-Database Fails with Invalid Directory on URL

The problem is that this line happens to return two installed packages of the same version number, and PowerShell's loosely typed nature accepts the duplicates and simply concatenates them into a single string. To address the issue, change the line to take only the first package returned. With that change, the Update-Database command will run successfully. However, you need to remember to check in the packages directory so that others on your team will be using the same modified package rather than simply downloading the latest, without said modification.

No doubt this is a fairly temporary problem but if you do encounter the error this solution could potentially save you considerable spelunking.

Suspend and Resume in Visual Studio using TFS
http://intellitect.com/suspend-and-resume-in-visual-studio-using-tfs/
Wed, 22 Apr 2015 16:39:24 +0000

In order to keep our release builds as bug-proof as possible, our development team expects code reviews to be completed before code changes are checked in. This presents a problem with Pending Changes in Visual Studio. Let's say I modify a few files for a particular user story or bug. After I submit the code review, and depending on the availability or responsiveness of the other team members, there will likely be a delay before I can check the code into Team Foundation Server (TFS). If I need to work on another issue, I would now have a mixed set of modified files in my Pending Changes, possibly with changes in some of the same files. Fortunately, since Visual Studio 2012, Microsoft has a solution. It's called Suspended Work.

Under the Team Explorer tab in Visual Studio there is a collection of options including My Work, Pending Changes, Source Control Explorer, etc. Selecting My Work shows In Progress Work, Suspended Work, Available Work Items, and Code Reviews.

Although an active Work Item (TFS User Story or Bug) is not required in order to use the Suspended Work feature, Visual Studio does relate Work Items if you do. If you want to relate one or more work items, those under Available Work Items can be dragged up to In Progress Work and vice versa. A history comment will appear on each listed work item whenever the associated code is shelved, unshelved, and finally checked in.

All active Work Items and currently Pending Changes are considered In Progress Work. If the Suspend button is clicked (see image above) then the user will be offered a text area in which the default description (taken from the Work Item, or n edit(s) if there is no Work Item selected) can be left, or a specific description can be entered. Clicking the second Suspend button under the text area (see image to the right) shelves the code along with references to the active Work Items, currently open files, breakpoints, etc. Basically, the current state of Visual Studio is saved for later recovery. Modified files will be reverted to the Latest Version.

Now the user is ready to start on a new Work Item without worrying about changing the files that have been shelved for code review.

When the code review is complete, there are three possibilities for recovering the suspended work. If there are other modified files, as shown in the image on the left [note 2 edit(s)], the options will be to ‘Switch’ the current work with the suspended work or to ‘Merge’ the suspended work with the current In Progress Work.

If there are no current edits, as shown in the image on the right, the only option will be to ‘Resume’ the suspended work.

To check in the reviewed code, the user would select ‘Switch’ or ‘Resume’ and the suspended work would be recovered along with the previous state of Visual Studio. The code can then be checked in. The shelveset will be deleted automatically. If ‘Switch’ is chosen, then the current work is suspended before the selected work set is restored. After check-in is complete, then the previous work [for instance, 2 edit(s)] can be ‘Resumed’.

This process also works for other scenarios. Say you’re working on a change and get interrupted with a more important task: you can suspend your current work and take on the new task. When finished, you can resume your previous work exactly where you left off; bookmarks, breakpoints and all.

This is a powerful feature that has many uses. I hope you find it as useful as I do.

Code Reviews
http://intellitect.com/code-reviews/
Sat, 14 Mar 2015 06:16:57 +0000

I absolutely love code reviews. My team uses a very informal, asynchronous method for doing code reviews. When changes are made, a code review request is sent to the other members of the team. As people have time, they look over the code review requests and provide feedback. Pretty painless; the way a code review should be.

The purpose of code reviews is to improve quality of both the code and the developers. With that in mind, I would like to present some pointers for doing code reviews.

Leave your ego at the door.

Everyone has something they can learn. Code reviews are not a time for senior developers to make sure junior developers' work is up to par. Rather, they are an opportunity for developers to learn from each other. In the past, developers with far less experience have improved my code by reviewing my work.

Ask questions on anything you cannot explain.

When reviewing code, keep in mind that you may be the next developer that has to work with it. A code review request is an opportunity to ask the original author about their changes while the code is still fresh in their mind. Always make sure you can explain what the new code is doing, and when practical, why it is doing it. Always pose a question in the code review on anything you cannot explain.

Review your own changes before submitting them.

The saying, “write your code like it will be maintained by a psychopath that has your home address” contains quite a bit of truth. Don’t waste your team’s time by sending out a code review before you have reviewed it yourself (you should do this before checking in the code too). A little proofreading goes a long way.

Keep the code review as small and as focused as possible. Try not to mix refactoring and bug fixes together.

Be ruthless to the code but kind to the developer.

Though it may be tempting, avoid commenting on code that was not affected by the changes in the code review. Only critique things affected by the changes you are reviewing. Typically, I consider any method that was modified and any code that is directly invoked or directly invokes a modified method to be open for critique. This is not a hard and fast rule; it may come down to a judgment call.

As a general rule, the reviewer is always right. The burden lies on the author to either make the reviewer’s suggested change, or defend their original work. The goal is to improve the quality of code. Thinking critically about your code and defending your work will ultimately make you a better developer. The reviewers are likely people you work with on a daily basis. Take this opportunity to try and foster a good working relationship.

Pay attention to detail; the little things matter.

Don’t waste your time clicking through a code review if you do not have the time to focus on it. Marking a code review as “Looks good” when it’s not will not improve quality; it only gives a false sense of it.

All developers should follow the same coding standard (your team does have one, right?). Anything that does not match your coding standard should be caught and addressed during a code review. This will help avoid disputes over developers’ personal preferences.

Though not comprehensive, here are some questions to ask when doing a code review:

Eric Edmonds is incredibly excited and humbled to be the Philanthropy Coordinator for IntelliTect. The impact of IntelliTect’s philanthropic projects has literally saved lives, as well as greatly improved the quality of life for thousands of people around the world.

Eric graduated from Peabody College at Vanderbilt University with a BA in Education and Cognitive Studies. He also spent a year at Cambridge University in England as part of his Vanderbilt education. Eric has a passion to make the world a better place, and as the Philanthropy Coordinator he is responsible for researching projects and organizations that will continue to promote IntelliTect’s mission around the world. Most evenings you can find Eric hanging out with his family, attending church functions or reading. He also enjoys swimming and biking, having completed several triathlons, including Ironman races in 2008 and 2012.

Jason Peterson has earned two degrees from Saint Martin’s College: a Bachelor of Science in Computer Science and a Bachelor of Arts in Math. After graduation, Jason began a career with Microsoft that spanned over twelve years. He specialized in mobile software development, specifically Windows Phone / Windows Mobile. Jason worked for several years on multiple mobile technologies, including compilers, storage device drivers, SDK sample development, and health/performance, which makes him quite an expert in the Windows Mobile platform. Since coming to IntelliTect, Jason has worked on several different projects for a large utility company. On these projects, he applied his extensive knowledge of C#, SQL Server, and Oracle databases to build efficient, reliable systems for utility processing. Additionally, he recently designed and implemented an entire automated load testing solution for a very large billing and Enterprise Asset Management project using HP LoadRunner, C#, and a custom SQL Server database.

Jason would say his greatest career accomplishments include the work he did to successfully complete several releases of the Windows Phone operating system. Outside of work, his greatest accomplishment would have to be his wife and two beautiful children. In his spare time, he enjoys spending time with his family, playing golf, basketball and baseball, and developing mobile software.

]]>http://intellitect.com/jason/feed/0JC Conrad – GoDirect Foodshttp://intellitect.com/jc-conrad-godirect-foods/
http://intellitect.com/jc-conrad-godirect-foods/#commentsWed, 11 Feb 2015 01:18:03 +0000http://intellitect.com/?p=15891“I have been working with IntelliTect for about a year now, since they conducted a business analysis for me and provided feedback for my business idea. They understand business, strategy and what it’s going to take to make a business successful. Now as I look back almost a year later, the value that I received from IntelliTect is worth way more than I think they even realize. Thanks IntelliTect for helping GoDirect Foods start on the right track.”
]]>http://intellitect.com/jc-conrad-godirect-foods/feed/0http://intellitect.com/15831/
http://intellitect.com/15831/#commentsTue, 10 Feb 2015 21:05:17 +0000http://intellitect.com/?p=15831Over the years I have engaged with many different software teams and have witnessed success and failure. This team knows how to execute and deliver exactly what is needed.
]]>http://intellitect.com/15831/feed/0Road to the Cloud: Seattle Business Strategy and Networking for ISVshttp://intellitect.com/road-to-the-cloud-seattle-business-strategy-and-networking-for-isvs/
http://intellitect.com/road-to-the-cloud-seattle-business-strategy-and-networking-for-isvs/#commentsFri, 30 Jan 2015 22:57:53 +0000http://intellitect.com/?p=15521Road to the Cloud: Seattle Business Strategy and Networking for ISVs will be led by Mark Michaelis on Tuesday, February 17, 2015 from 1:00-6:00.

Business leaders of independent software vendor organizations (ISVs) face increasing challenges in today’s software market. Many companies that have historically bought packaged software solutions are evaluating software as a service (SaaS) and cloud-backed software solutions to replace or augment their legacy software solutions. This new cloud market presents tremendous opportunity for the business leaders of established ISVs. Capturing that strategic opportunity brings not just technical changes, but fundamental shifts to your company’s business model, and a platform decision is a key component of that shift.

Join this event to learn from your peers in the industry that have leveraged the benefits of the cloud to build a successful business. You’ll hear from owners and leaders of successful software businesses about best practices and lessons learned, and gain insight about the cloud opportunity for a software business.

This event is focused on business strategy, and is not a technical learning event.

Agenda

1:00-2:00

Applications to Apps: The Shifting Software Market

The rapid co-evolution of hardware and software in a mobile-first, cloud-first world is changing the way ISVs do business: from concept to delivery to sales and monetization. Thriving in this evolving environment means looking at customers and the industry in a new way. In this session we’ll look at market trends and the ways many ISVs are evolving the way software is developed, marketed and sold.

Break

2:15-3:15

Cloud Computing Models: Private, Public and Hybrid

Analysts project that SaaS applications will significantly outpace traditional software product delivery in the near future. As an ISV facing this ever-changing cloud landscape, you need to make critical decisions about your application lifecycle and hosting models. Evaluate some of those considerations, and learn how the platform you choose can support the model you determine.

Break

3:30-4:30

Cloud Business and Cost Models

Cloud computing is less a technological revolution than it is a business revolution. In this session we’ll look at trends that are driving cloud computing and the opportunities these bring to ISV organizations to compete in the marketplace. We’ll see how cloud computing can change an ISV’s business model in potentially radical new ways and discuss concrete ways your business can grow in the modern world of software.

Stephanie LaBrosse is a dedicated and passionate software test professional with approximately 15 years of combined experience in software test and test management. This Eastern Washington grad loves to actively promote innovation that leads to improved software testing practices and methodologies. She is a born leader and thrives on collaboration, teamwork, and work environments that promote coaching and mentoring, which ultimately leads to positive employee engagement and career development.

If you were to ask Stephanie what her greatest accomplishment is, she would laugh and say staying sane while working full-time and keeping up with her three active kids and their busy athletic schedules! She loves to push herself, has completed two Tough Mudders, and looks forward to maybe someday completing a full marathon! ;)

Heidi is a QA Engineer with over five years of experience in a variety of spaces, focusing most recently on the utility and medical industries. She has extensive experience designing, customizing and creating test plans according to the latest test design specifications. She has managed complex QA processes and infrastructure for a large multi-year utility project using HP Quality Center. She loves creating manual and automated tests designed to exercise critical components and complex interactions of multi-application systems. During her most recent project, she integrated into existing teams needing leadership and expertise with hands-on QA testing and data collection.

Heidi studied computer science at the University of Idaho and was recruited right out of college by a Seattle based company as a software engineer for their mobile and embedded systems department. Heidi was born and raised in North Idaho and enjoys skiing, camping, fishing and spending time with friends and family. Being a thrill seeker, she enjoys skydiving, bungee jumping and white water rafting trips.

Zac Jones has a Bachelor of Arts in Graphic Design and a Masters in Administrative Leadership. Zac has over ten years in web/user interface design and has won The Spark Excellence Award four times: in 2011, in 2009 and twice in 2008. These awards include three website design awards and one for an email campaign.

Zac has extensive experience creating dynamic and interactive websites, as well as creating advertising strategies, email marketing campaigns, branding and web promotions at the regional and international levels.

If asked what his greatest accomplishment was, he too would say his family and children. Zac loves doing anything outdoors with his family, as well as hiking and playing baseball and basketball in his free time.

Chris Finlayson loves working on a variety of challenging engineering problems across different industries. Chris has an equal amount of experience doing algorithm development, large scale enterprise software architecture/development and database engineering specializing in Microsoft technologies.

Chris’s two primary focuses are technical project management and high-level software engineering. Chris has a BS in Computer Science, Summa Cum Laude, from UMass-Amherst.

He has worked for various startup companies and consulting firms in public, private and government / military sectors. Outside of work, Chris enjoys a balanced, active lifestyle by spending time with his finest accomplishment: his family. He is very fortunate to have a lovely, caring, supporting wife and together they enjoy raising their two sons. They hike, bike and enjoy being outdoors.

Having worked almost exclusively with Microsoft technologies for the majority of his career, Kevin was a clear choice for IntelliTect. His current goal is earning his MCSD certification. For now he will have to settle for his three other degrees: a degree in Computer Science with a minor in Mathematics, an Associate of Science transfer degree, and an Associate of Arts degree, having graduated with honors in each.

Kevin has built and released a Windows Desktop version of the Bible+ (an ereader) almost exclusively in C# and WPF, as well as building and releasing a Windows Store version of the same ereader in JavaScript and HTML. He is somewhat of an expert in C#, having just passed his Windows C# exam.

He would say one of his greatest accomplishments was successfully completing a 1,000 mile motorcycle trip and welcomes questions on where he has traveled! He loves playing ultimate Frisbee, online gaming, programming and spending time with his wife.

Results-driven Senior Software Architect with a 19-year record of achievement. Uniquely skilled at designing and implementing large-scale, multi-million-dollar software initiatives with a special emphasis on high-level government software projects. Starting from scratch and gathering both military and civilian requirements, he has the ability to design, architect, and build unifying, easy-to-use interfaces that readily integrate multiple complex engineering tool suites.

This Senior Architect is a confident leader who fosters employee growth and development in any business setting. He has proven ability to provide high level technical vision and direction for any complex software project in a secure environment. Extensive experience in security features and managing high level software projects through their entire life cycle including: inception, design, development, deployment, evolution and maintenance.

When Grant is not busy juggling multiple projects at once he enjoys spending time with his very talented and wonderful family. They are a very active family who really enjoy spending time together, music, theater and spending time outdoors on their gorgeous property.

IntelliTect and Mark Michaelis will be hosting a Xamarin Hack Day on Microsoft’s campus on Saturday, January 24th. Join us!

What is Xamarin Hack Day?

Xamarin Hack Days are for Xamarin Developers or people who want to learn Xamarin.

There will be something for everyone: experienced Xamarin developers can share ideas with other experienced developers, and, if you’re a beginner we’ll be there to help you learn about Xamarin development and get started building something.

We recommend that attendees bring their own laptop on the day with Xamarin already set up and installed. If you’ve never played with Xamarin before, it would be good to do a little bit of learning before the day, but it’s not a requirement as all are welcome. Cost is free!

When and where?

]]>http://intellitect.com/redmond-xamarin-hack-day/feed/0The CIL of C# 6.0’s String Interpolationhttp://intellitect.com/the-cil-of-c-6-0s-string-interpolation/
http://intellitect.com/the-cil-of-c-6-0s-string-interpolation/#commentsFri, 02 Jan 2015 22:57:52 +0000http://intellitect.com/?p=14551One of the C# 6.0 features that will most simply and perhaps most predominantly affect the way you write C# code in the future is string interpolation. Besides explaining composite string formatting (the old way –

string.Format("{0} {1}", firstName, lastName)

) or obviously being relegated to a pre-C# 6.0 world, there is little reason to revert from the string interpolation syntax. Those of you previously writing code without the benefit of the new syntax might be curious to learn how it is implemented. In this regard I provide a brief introduction in my C# 6.0 article included in the Special December 2014 issue of MSDN Magazine:

“String interpolation is transformed at compile time to invoke an equivalent string.Format call. This leaves in place support for localization as before (though still with composite format strings) and doesn’t introduce any post compile injection of code via strings.”

In addition to the localization remark, the important point in this description is that the string interpolation syntax doesn’t provide a means of passing format strings whose embedded expressions could inject arbitrary code into an assembly. In other words, string interpolation doesn’t provide a mechanism, for example, to transform

string.Format("{0} {1}", firstName, lastName)

to

string.Format($"DeleteFile("*.*")", firstName, lastName)

for example.

What the description does not do justice to, however, is the internals of the compile-time-equivalent IL code generated by the C# compiler. Although trivial, the code is not as equivalent as one might naively expect. Consider, for example, the following C# code:

System.Console.WriteLine(
$"Your full name is {firstName} {lastName}.");

Converting this to

System.Console.WriteLine("{0} {1}", firstName, lastName)

is clearly not general enough and would require numerous “switch”-type cases. However, because string.Format does not support the multitude of box-avoiding overloads that Console.WriteLine does, not only is an additional string.Format call necessary, but the declaration and initialization of an args array is also necessary. The C# equivalent of the resulting IL code, therefore, is as follows:
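As a rough sketch of that expansion (the sample values and the `args` local name are illustrative, not the compiler's actual output):

```csharp
string firstName = "Inigo";   // sample values for illustration
string lastName = "Montoya";

// Approximate C# rendering of the emitted IL: the interpolated string is
// lowered to a string.Format call, with the arguments gathered into an
// object[] (boxing value types where applicable), and the formatted
// result is then passed on to Console.WriteLine.
object[] args = new object[] { firstName, lastName };
System.Console.WriteLine(string.Format("Your full name is {0} {1}.", args));
```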

Clearly, such verbosity is nothing to worry about unless in the most extreme of extreme performance scenarios.

On a side note, I am especially appreciative of the fact that the string interpolation syntax doesn’t require special escape characters for “code” that appears in the expression blocks. In other words, thank goodness the syntax is simple enough that I don’t have to worry about escaping things like quotes in expressions that include said quotes, such as

System.Console.WriteLine($"The file, {GetFullPath("HelloWorld.cs")} does not exist!");

. (A statement that leverages C# 6.0’s

using static System.IO.Path

of course.)
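Putting the side note together, a minimal self-contained sketch (the file name and message are illustrative):

```csharp
using static System.IO.Path;  // C# 6.0: brings GetFullPath into scope

public class Program
{
    public static void Main()
    {
        // The quotes inside the expression block need no escaping:
        System.Console.WriteLine(
            $"The file, {GetFullPath("HelloWorld.cs")} does not exist!");
    }
}
```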

I am in the midst of writing Essential C# 6.0 and string interpolation is one of the changes that is permeating virtually every chapter, a change that I feel simplifies the code throughout, even if only minimally.

]]>http://intellitect.com/the-cil-of-c-6-0s-string-interpolation/feed/2IntelliTect supports La Escuela Integrada in Antigua, Guatemalahttp://intellitect.com/intellitect-supports-la-escuela-integrada-in-antigua-guatemala/
http://intellitect.com/intellitect-supports-la-escuela-integrada-in-antigua-guatemala/#commentsFri, 02 Jan 2015 02:33:35 +0000http://intellitect.com/?p=14381During Spring 2014, Mark and Elisabeth Michaelis, the owners of IntelliTect, took their family to Antigua, Guatemala for a vision trip analyzing business and philanthropy opportunities. While there, they toured both campuses of La Escuela Integrada, a school offering free education to children unable to pay the school fees required for Guatemala’s “free public education”. Many of these students live in desperate circumstances without access to clean water and with limited access to daily meals. La Escuela Integrada provides these children with access to excellent education, as well as two meals a day, clean water, and life skills counseling. Without La Escuela Integrada, these children would most likely be malnourished, illiterate, and living on the streets. Many of their graduates have gone on to higher education and have started businesses in the neighborhoods near the school campuses. After seeing the clear impact this school has on hundreds of Guatemalan children, Mark and Elisabeth decided to become a major supporter of Escuela Integrada. In the past year, IntelliTect has helped pay teacher salaries, purchased textbooks for all the students in the school, helped purchase a school van for transportation and provided strategic planning support for the school administration.
]]>http://intellitect.com/intellitect-supports-la-escuela-integrada-in-antigua-guatemala/feed/0World Relief Spokane and IntelliTect Welcome Refugeeshttp://intellitect.com/world-relief-spokane-welcomes-refugees/
http://intellitect.com/world-relief-spokane-welcomes-refugees/#commentsFri, 02 Jan 2015 00:35:16 +0000http://intellitect.com/?p=14301In addition to its work around the globe, World Relief has been chosen by the United States government to be one of the organizations to sponsor legal refugee settlement into the US. The World Relief Spokane office welcomes hundreds of refugees from all over the world to the Spokane area each year. IntelliTect is excited to support World Relief Spokane’s efforts to help these refugees find a safe place to live, get a job that suits their talents, improve their language skills and legally gain US citizenship if they desire. We help sponsor the Matching Grant program, which offers loans to qualifying refugees in order to help them become self-sufficient within six months of their arrival in Spokane. This program is more than 90% successful in achieving this goal. More importantly, the City of Spokane, Washington State and the Federal Government have saved millions of dollars in support for these new US residents. We believe investing in these refugees benefits everyone involved and we are privileged to be part of their experience here in the US.

In addition to financial support, IntelliTect employees have hosted immigrant families for the first few days or weeks after their arrival in Spokane. We have hosted families from Cuba, Rwanda and Afghanistan. Each time, the families included incredible people with remarkable stories of bravery and courage.

]]>http://intellitect.com/world-relief-spokane-welcomes-refugees/feed/0IntelliTect Hosts Hour of Code at Central Valley High Schoolhttp://intellitect.com/intellitect-hosts-hour-of-code-at-central-valley-high-school/
http://intellitect.com/intellitect-hosts-hour-of-code-at-central-valley-high-school/#commentsMon, 29 Dec 2014 22:05:27 +0000http://intellitect.com/?p=13841Central Valley High School and IntelliTect are joining the mission to introduce 100 million students to computer science by participating in the Hour of Code the week of December 8-14th.

IntelliTect Chief Technical Architect and Trainer, Mark Michaelis and Central Valley High School’s Mr. Joseph Pauley worked together on Wednesday, December 10th to introduce two classes of Central Valley (CV) students to the Hour of Code. “The dream is that the importance of computer science will be on par with languages, history, and even higher level science when it comes to education. Virtually every piece of software we use today has an automation component that helps with repetitive tasks or data analysis and yet few adults have any idea it exists never mind how to leverage it,” says Mark Michaelis.

In China, every student takes computer science in order to graduate from high school. Sadly, 90% of schools in the US don’t teach it. It’s time for us to catch up and give our students the tools to succeed.

“The Hour of Code is designed to demystify code and show that computer science is not rocket-science, anybody can learn the basics,” said Hadi Partovi, founder and CEO of Code.org. “In one week last year, 15 million students tried an Hour of Code. Now we’re aiming for 100 million worldwide to prove that the demand for relevant 21st century computer science education crosses all borders and knows no boundaries.”

IntelliTect taught two sessions of the Hour of Code on Wednesday, December 10th to over 50 students at Central Valley High School.

]]>http://intellitect.com/intellitect-hosts-hour-of-code-at-central-valley-high-school/feed/0IntelliTect and Local Businesses Hold Holiday Home Essentials Drivehttp://intellitect.com/13681/
http://intellitect.com/13681/#commentsTue, 23 Dec 2014 22:01:25 +0000http://intellitect.com/?p=13681This holiday season IntelliTect invited several local businesses to participate in a household essentials drive to benefit the Spokane based organizations Hearth Homes and the Northwest Connect Hands Up Non-Food Pantry. The drive took place December 1-12th at Tilton Excavation Co., The Liberty Lake Athletic Club, Northwest Health Systems and Casey Family Dental.

“We have teamed up with IntelliTect and a few other local businesses to gather non-food household items and donate to two local charities, Hearth Homes and Hands Up Non-Food Pantry,” said Amy Tilton of Tilton Excavation Co. in Otis Orchards. “These charities help families in need in the Spokane area, and this time of year is often more difficult for them.”

Hearth Homes provides transitional housing to homeless women and their children in Spokane Valley while Northwest Connect serves the poor of Spokane with a food bank and the only non-food pantry in Spokane. These two charities are always grateful for food donations, but they were in need of non-food items like dish soap, all purpose cleaners, shampoo, soap, razors, deodorant, laundry detergent and trash bags. These items are expensive and are not provided to families through the other local food banks.

“While food banks can actually purchase food cheaper than the general public, there is not a cheap way for them to buy these common every day household items,” said Jenni LaBella, IntelliTect Social Media and Public Relations Consultant. “We may take them for granted, but they are critical. For families struggling to provide these basic necessities, donations make all the difference.”

IntelliTect employees and their children delivered all the donations the week before Christmas. “I liked helping my mom and sorting all the donations,” said Dominick LaBella, age 6. “We loaded the boxes, and then I helped carry them in and unload them. We helped the poor people who needed our help.”

The household essentials drive was a success and raised over $700 worth of everyday products. “We want to thank everyone who contributed to our home essentials drive for Hearth Homes and Hands Up,” said Amy Tilton. “Many donations were made and we are thrilled with the outcome!”

IntelliTect’s own Mark Michaelis will be presenting the following topic:

A Pragmatic Understanding of SharePoint 2013 Architecture for Business Users

In this interactive discussion, Mark Michaelis will clear up all the confusion surrounding the technical details of all that is SharePoint 2013. If you frequently find yourself immersed in conversations with SharePoint nerds and not following, this is the talk for you. Mark will take a pragmatic look at terms and phrases such as SharePoint hosting models, incompatibility with SharePoint 2010 development, oAuth security, Apps for SharePoint, tenants, what does(n’t) work on Office 365, and lots more. This session will help you, the non-nerd SharePoint business user, understand all the technical noise that surrounds SharePoint 2013 and the pragmatic implications of that noise.

]]>http://intellitect.com/sharepointusergroup201501/feed/0Join IntelliTect’s Mark Michaelis as he leads you down the Road to the Cloud!http://intellitect.com/join-intellitects-mark-michaelis-as-he-leads-you-down-the-road-to-the-cloud/
http://intellitect.com/join-intellitects-mark-michaelis-as-he-leads-you-down-the-road-to-the-cloud/#commentsThu, 04 Dec 2014 01:04:47 +0000http://intellitect.com/?p=13101Business leaders face increasing challenges in today’s software market. Let industry peers who have leveraged the benefits of the opportunity in the cloud help you develop a successful business strategy. Register now for this free Road to the Cloud event in Seattle, WA on February 17, 2015. Navigate to http://aka.ms/rttcseattle to register.

This event focuses on business strategy and is not a technical learning event. Join IntelliTect’s Mark Michaelis for sessions including Applications to Apps: The Shifting Software Market, Cloud Computing Models: Private, Public and Hybrid, and Cloud Business and Cost Models. See you in Seattle!

]]>http://intellitect.com/join-intellitects-mark-michaelis-as-he-leads-you-down-the-road-to-the-cloud/feed/0Getty Images Issues Copyright Violation Settlement Demand Letter to IntelliTecthttp://intellitect.com/getty-images-issues-copyright-violation-demand-letter-to-intellitect/
http://intellitect.com/getty-images-issues-copyright-violation-demand-letter-to-intellitect/#commentsTue, 18 Nov 2014 05:34:21 +0000http://intellitect.com/?p=12281Back in 2011 I did a talk entitled Management vs Leadership at the local PMI chapter. During that talk I displayed this slide (shown below), which included a small hex image of hands in a metallic bowl like the one on this book cover (http://bit.ly/1Hdfkih). Unfortunately, I didn’t check the copyright when building the slides. Following the presentation I was asked for the slide deck, which I printed to PDF and made available on our website. Now, in 2014, Getty Images has contacted us with a fine of $795 for the use of the image. Essentially they have a crawler (PicScout I expect) that tracks images across the Internet, looking for violations. What is especially remarkable is that they didn’t just look for a copy of the png/jpg, but were able to open up the PDF printed from PowerPoint and extract the image embedded there.

Moral of the story: don’t use images unless you own them, have copyright permission for them, or they are in the public domain.