Premier Developer
https://blogs.msdn.microsoft.com/premier_developer
Building the business of tomorrow with developers today

How Microsoft Does DevOps – an Interview with Aaron Bjork, Microsoft Visual Studio Team Services (VSTS)
Mon, 19 Mar 2018
https://blogs.msdn.microsoft.com/premier_developer/2018/03/19/how-microsoft-does-devops-an-interview-with-aaron-bjork-microsoft-visual-studio-team-services-vsts/

This post is provided by App Dev Manager Dave Harrison based on an interview with Aaron Bjork, Principal Group Program Manager for VSTS (Visual Studio Team Services) at Microsoft.

The following content is shared from an interview conducted in January with Aaron Bjork, Principal Group Program Manager for the VSTS (Visual Studio Team Services) product at Microsoft. Many people ask us how Microsoft accomplished its DevOps transformation. Our interview with him surfaced some valuable lessons that could apply to any large enterprise trying to transform the way it delivers value and gets feedback faster.

I just want to stress that you can’t follow what we did on the Visual Studio Team Services (VSTS) team like a prescription. There’s not another product in the world like ours; it would be foolish for me to say you should do it exactly our way.

That being said, I do see some common elements in teams that successfully make the jump to DevOps:

Have a single cadence across all your teams. I haven’t seen a single place yet where that won’t apply. Your teams within that cadence can have significant freedom and autonomy, but we want everyone to be dancing to the same beat.

Ship at the end of each sprint. The saying we live by goes – “You can’t cheat shipping.” If you deliver working software to your users at the end of every iteration, you’ll learn what it takes to do that and which pieces you’ll need to automate. If you don’t ship at the end of each iteration, human nature kicks in and we start to delay, to procrastinate. Shipping at the end of a sprint is comfy and righteous and produces the right behaviors.

We same-size our teams. Every team has a consistent size and shape – about 8-12 people, working across the stack all the way to production support. This helps not just with delivering value faster in small increments, but also gives us a common taxonomy so we can work across teams at scale. Whenever we break that rule – teams that are smaller than that, or that bloat out to 20 people, for example – we start to see anti-patterns crop up: resource horse-trading and things like that. I love the “two pizza rule” at Amazon; there’s no reason not to use that approach, ever.

Have each team own their features as a product. Our teams own their features in production. If you start having siloed support or operations teams running things in production, almost immediately you start to see disruption in continuity and other bad behaviors. It doesn’t motivate people to ship quality and deliver end to end capabilities to users; instead it becomes a “not it” game.

In handling support, each team is split every sprint into an “F” team and an “L” team. The F team is focused on new features; the L team is focused on disruptions and lifecycle. We rotate these people, so every sprint a different pair of engineers handles bugfixes and interruptions while the other ten work on new features. This helps people plan their lives around when they’re on call.

We’ve gone through a big movement in the past few years where we took our entire test bed, which was automated, UI-focused, and light on unit testing, and flipped it on its head. Now we run far fewer of those automated UI tests and a ton of what we call L1 and L2 tests, which are unit tests and other low-level tests that check components and end-to-end capabilities. This allows us to run through our test cycle much faster, on every commit. I think you still have to do some level of acceptance testing; just determine what level works for your software base and helps drive quality.

We started to deploy at the end of every three-week sprint instead of twice a year. Another thing was that we moved everyone into the same building, reporting up through the same structure and org. The folks that run our ops are part of our leadership team, just like our engineering and program management teams, all under the same umbrella. This got everyone bought into shared goals. We have monthly business reviews where we talk about more than just the technical goals: financial, operations, and bug health, not just code. This helps us align on the same goal, bringing people under the same umbrella so we are invested in the other side, if you will.

Our teams own features in production. We hire engineers who write code, test code, deploy code, and support code; in the end, that's DevOps. Now our folks have a relationship with the people handling support, because they have to. If you start with that setup, the rest falls into place. If you have separate groups, each responsible for a piece of the puzzle, that’s a recipe for not succeeding, in my view.

Branching is similar in that we don’t have long-lived branches at all. We do have a release branch, but our engineers branch their work from mainline and check their short-lived branches directly back into main. In general, people are checking changes into their user branch every day; every other day they submit a pull request to integrate their user branch back to main. The team handles all merge issues internally; everything is validated before it’s checked in.

When I think about how we handle releases, a couple of things come to mind. First, we want to minimize the time that any code sits in isolation. We used to have a mindset where, at the beginning of each sprint, teams would check their code into a feature branch and then integrate back at the end of the sprint. The problem with this is that the longer you stay away from master, the harder it is to integrate, and you pay a massive tax in merge issues. We want to check into master continuously; that's a very important construct for us. Second, we wanted to get into the mindset that when a feature is ready, it’s easy to put it into production. Instead of the idea that we will put a new feature into production when it's 100% ready, move to where features are ALWAYS being put into production. We were trying to get release mechanics out of the realm of something we constantly had to manage; it should be a consistent, almost unconscious mechanical movement. Now our mechanics are the same whether something is a bug, a critsit incident, or a new feature, and we do it without thinking. Getting to that model and thinking that way required some change, but now we’re always writing code and always deploying code. Feature flags were a big help here: we can turn on access to a new feature when we’re ready, and it’s safe and controlled.
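At its simplest, the feature-flag pattern is just a conditional check against a central flag store. The sketch below is a minimal illustration of the idea described above; the interface, class, and flag names are hypothetical and are not the actual VSTS implementation.

```csharp
// Hypothetical flag store; in practice this could be backed by a config
// file, a database table, or a feature-management service.
public interface IFeatureFlagStore
{
    bool IsEnabled(string featureName);
}

public class PullRequestPage
{
    private readonly IFeatureFlagStore _flags;

    public PullRequestPage(IFeatureFlagStore flags)
    {
        _flags = flags;
    }

    public void Render()
    {
        // The new code path ships to production "dark"; flipping the flag
        // controls when users actually see it, and flipping it back is the
        // rollback plan.
        if (_flags.IsEnabled("NewPullRequestExperience"))
        {
            RenderNewExperience();
        }
        else
        {
            RenderClassicExperience();
        }
    }

    private void RenderNewExperience() { /* new UI path */ }
    private void RenderClassicExperience() { /* existing UI path */ }
}
```

Because the new and old code paths coexist in production, deployment and exposure become separate decisions, which is exactly the "always deploying" posture described above.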

Pair programming is accepted widely as a best practice; it's also a culture that shapes how we write code. The interesting thing here is we don’t mandate pair programming. We do teach it; some of our teams have embraced pair programming and it works great for them, always writing in tandem. Other teams have tried it, and it just hasn’t fit. We do enforce consistency on some things across our 40 different teams; others we let the team decide. Pair programming and XP practices are one thing we leave up to the devs; we treat them as adults and don’t shove one way of thinking down their throats.

Another big help to us is a kind of team-of-teams meeting, which we hold once every sprint. This is not a “get everybody in the room” type of meeting; it’s very focused, about 4-6 people in the room, each representing their team. We don’t talk about what we’re doing now, but about what we’re working on three sprints ahead. It always amazes me how many “A-Ha!” moments we have during these meetings. It really helps expose points of dependency we weren’t aware of: “Hmm, we should probably sync up and make sure we have a shared point of view.” In my view this is very agile; it’s lightweight, just enough to accomplish the purpose.

We do track one metric that is very telling – the number of defects a team has. We call this the bug cap. You just take the number of engineers and multiply it by 4, so if your team has 10 engineers, your bug cap is 40. We operate under a simple rule: if your bug count is above the bug cap, then in the next sprint you need to slow down and pay down that debt. This helps us fight the tendency to let technical debt pile up and become a boat anchor you're dragging everywhere and fighting against. With continuous delivery, you just can't let that debt creep up on you. We have no dedicated time to work on debt, but we do monitor the bug cap and let each team manage it as they see best. I check this number all the time, and if we see it go above the limit, we have a discussion to find out whether there’s a valid reason for the debt pileup and what the plan is to remedy it. We don’t allow any team to accrue significant debt; we pay it off like you would a credit card, except instead of making the minimum payment we’re paying off the majority of the balance every pay period. It’s often not realistic to say “zero bugs” – some defects may just not be that urgent or shouldn’t come ahead of hot new feature work in priority. This allows us to keep technical debt at a reasonable number and still focus on delivering new capabilities.
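The arithmetic of the bug cap rule is simple enough to restate directly; the snippet below is just that restatement, not code from the VSTS team.

```csharp
public static class BugCap
{
    // Bug cap = number of engineers on the team x 4.
    public static int For(int engineerCount) => engineerCount * 4;

    // If the open bug count exceeds the cap, the next sprint slows down
    // feature work and pays the debt back down.
    public static bool MustPayDownDebt(int engineerCount, int openBugCount) =>
        openBugCount > For(engineerCount);
}

// Example: a 10-engineer team has a cap of 40, so 47 open bugs means
// the team should spend the next sprint reducing that backlog.
```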

We have an engineering scorecard that’s visible to everyone but we’re very careful about what we put on that. Our measurements are very carefully chosen and we don’t give teams 20 things to work on – that’s overwhelming. With every metric that you start to measure, you’re going to get a behavior – and maybe some bad ones you weren’t expecting. We see a lot of companies trying to track and improve everything, which seems to be overburdening teams – no one wants to see a scorecard with 20 red buttons on it!

Agile is a culture more than anything else, but – I’m going to be frank – too many people have turned it into a religion, a stone tablet with a bunch of “thou shalts” on it. Some organizations we’ve worked with, for example, bring in multiple rounds of expensive consultants and agile trainers, and they’re given an audit: “Oh, you’re not doing daily stand-ups (DSUs), your sprint planning meeting doesn’t have the right amount of ceremony, blah blah.” This makes me laugh a little. Do I think daily standups are good practice? Yes, I do. But I’m not going to measure a team’s efficiency by these things. If the team is struggling to produce business value, then we might bring in some of these practices. But it is SO shortsighted to say that if you follow these practices with this recipe you’ll be successful. I don’t allow people to start telling me “we need to do things Agile.” There’s just no such thing. Talk to me about what you want to achieve, the business value you want to drive, and that’s our starting point.

Just because you have a DSU doesn't mean you're making the right decisions. Just because you're using containers or have adopted microservices doesn't mean you're doing DevOps. Maybe you’re better set up to do Agile or DevOps because of these tools, but nothing has really changed. Agile is very simple and beautiful as a mindset – we are going to deploy as frequently as we can. Too often we turn it into a set of rules you have to follow.

Premier Support for Developers provides strategic technology guidance, critical support coverage, and a range of essential services to help teams optimize development lifecycles and improve software quality. Contact your Application Development Manager (ADM) or email us to learn more about what we can do for you.

PowerShell Profiling
Sun, 18 Mar 2018
https://blogs.msdn.microsoft.com/premier_developer/2018/03/18/powershell-profiling/

In the following post, Premier Developer Consultant Lizet Pena De Sola shows us how to analyze the performance of PowerShell scripts to pinpoint code with high CPU consumption and optimize resource usage.

As part of my job, I help developers take a closer look at their source code and analyze it under the “microscope.” Part of this analysis is profiling the performance of the different components of a solution for CPU usage, network usage, IO, and memory usage. I try to pinpoint the areas of the code that consume the most resources and see whether they can be optimized. This is what is known as profiling an application or a solution.

Visual Studio 2017, Community, Professional, and Enterprise editions all offer profiling and performance analysis tools. They cover a variety of languages and types of targets to be profiled. The image below shows the different profiling targets that can be analyzed with the Performance Profiler:

In the world of DevOps, part of the build automation is done using scripting languages, and one of them is PowerShell. After one of the training sessions on performance analysis and profiling with VS 2017, the question was posed:

How can we analyze the performance of PowerShell scripts to determine the areas of the code that consume the most CPU and take the most time to complete?

Deploying Your Dockerized Angular Application To Azure Using VSTS (Part II)
Sat, 17 Mar 2018
https://blogs.msdn.microsoft.com/premier_developer/2018/03/17/deploying-your-dockerized-angular-application-to-azure-using-vsts-part-ii/

Premier Developer Consultant Wael Kdouh demonstrates how to maintain consistency across development and production environments by utilizing Docker containers. He will show you how this is possible while automating the process with VSTS.

In my previous post, I showed you how to deploy your Angular application to Azure using Visual Studio Team Services (VSTS). While VSTS makes it extremely easy to build a CI/CD pipeline, one aspect always proves to be challenging: consistency between the development environment and the production environment. For example, when you develop your Angular application locally, the application is served by the webpack dev server; when you host it on Azure, it is served by IIS. In this post, I will show you how you can use Docker containers to run the same environment on both development and production machines while automating the whole process with VSTS.

Something is Odd About the Monolithic Application Discussion
Fri, 16 Mar 2018
https://blogs.msdn.microsoft.com/premier_developer/2018/03/16/something-is-odd-about-the-monolithic-application-discussion/

In this post, App Dev Manager Mark Eisenberg takes a look back at the origin of monolithic applications and sets up a discussion about what needs to change.

A few weeks ago I recorded a podcast which was a free-flowing discussion about monolithic applications, the problems therewith and why they are still with us some 50+ years after their arrival on the scene. For those that have the time and prefer something less structured you can head over to my friend Bryan Hogan's site at Breaking the Monolith and check it out. This post is meant to be a more structured discussion on the same topic.

Monolithic applications are with us today because they either do the job (Etsy) or because successive applications of new paradigms have failed to yield a different result. Inevitably, the paradigm is blamed rather than the application of it. We need to address the real problem which is resistance to change rather than settling for the status quo.

I'm putting a stake in the ground marking the birth of monolithic applications in April of 1965, when the first IBM 360 mainframe was shipped. It might be earlier than that, but for the purposes of this discussion that is far enough down the geologic strata to make the point. It's more than 50 years. And for most of those 50 years we have been introducing wave after wave of solutions to the challenges that seem to be inherent in monolithic architectures: mainly large applications that are brittle and resistant to any significant improvement.

If these challenges were simply technical problems we could just leave them alone, but it is the impact to "The Business" that compels us to keep trying. The Business says they need new functionality; we say we can't because that would impact other critical functions. The Business says we need to reduce cost; we say we can't because we are tied to a legacy platform and it would cost millions and take years to “replatform”. The Business says security is a top priority. We say, umm, we're stuck with a business-critical application on an out-of-support platform.

On that last point, did I mention that monolithic applications only started with the mainframe? The skills learned building them turned out to be almost infinitely transferable. College kids today are quite well versed in this architecture pattern, in large part because companies need developers with the skills to maintain these aging beasts, and the drumbeat of retirements requires fresh talent.

The Business is constrained by an architectural pattern that was born over 50 years ago. An incomplete list of proposed solutions would include procedural programming, modular programming, object-oriented programming, service-oriented architecture, web services, and microservices. The last is the current darling, and it has not delivered on the promise of software that runs at the pace of business. I happen to be a fan of microservices, but that is not the point here.

The point is that any of these ideas could have addressed the challenges presented by monolithic applications. They have not delivered because they tend to be poorly understood and thus poorly applied. Discussions of changing paradigms are littered with phrases that start with the word "can't." Can't do that because of security. Can't do that because of the database. Can't do that because, and this is my favorite, we don't have the skills. The bottom line is we can't change.

But of course we can change. And the first thing we need to change is our concept of the solution. This came to me as a sort of epiphany when it occurred to me that I have decades of experience and yet we are still talking about the same problem. That has to mean we are going about this the wrong way. First, we need to drop the search for a magic-wand solution in the form of yet another new technology paradigm (yes, I like the word paradigm; just because a bunch of people turned it into a cliché in the 90s does not mean it is not still a useful word) and start figuring out how to implement change.

How? That will have to come with my next post.

Premier Support for Developers provides strategic technology guidance, critical support coverage, and a range of essential services to help teams optimize development lifecycles and improve software quality. Contact your Application Development Manager (ADM) or email us to learn more about what we can do for you.

These days, I’m exploring the combination of HoloLens/Windows Mixed Reality and the capabilities offered by Cognitive Services to analyse and extract information from images captured via the device camera and processed using the Computer Vision APIs and the intelligent cloud. In this article, we’ll explore the steps I followed for creating a Unity application running on HoloLens and communicating with the Microsoft AI platform.

Premier Support for Developers provides strategic technology guidance, critical support coverage, and a range of essential services to help teams optimize development lifecycles and improve software quality. Contact your Application Development Manager (ADM) or email us to learn more about what we can do for you.

Azure Mobile Apps are built on Azure App Services. Through the Azure portal you can configure your Azure Mobile App to provide sign-in, push notifications, and data synchronization. When you configure sign-in through the Azure portal for your Azure Mobile App, by default you are leveraging the “server directed” or “server managed” authentication flow. While this approach will suffice for simple application scenarios, “client directed” or “client managed” authentication is the preferred method for authenticating with an Azure Mobile App.

Using client directed authentication, your mobile application contacts the identity provider independently and then includes the returned access token when logging in to your Azure Mobile App, rather than relying on the Azure Mobile App service to handle the exchange with the identity provider. Client directed authentication is preferred over server directed authentication because the identity provider SDK delivers a more native user experience and supports single sign-on and token refresh.

In this article we will walk through using client directed authentication with Azure Active Directory to authenticate to an Azure Mobile App from a Xamarin.Forms mobile application.

Setting Up the Environment

Prior to writing our code we need to perform two actions in the Azure portal. First, we must create an Azure Mobile App and register that web application with Azure Active Directory. Second, we must register a native client application with Azure Active Directory and grant it access to call the Azure Mobile App. An example of each Azure Active Directory application registration is shown below.

While registering your applications, also make sure to note the directory ID on the Properties blade of your Azure Active Directory. You will need this later.

With our mobile app and native client registered in Azure Active Directory we need to enable authentication on our mobile app in the Azure portal. When configuring our Azure Mobile App we will enable App Service Authentication and then configure Azure Active Directory as an Authentication Provider. On the Active Directory Authentication blade we will select the Advanced option and enter the client ID of the web application registered earlier and the issuer URL. The issuer URL will be https://sts.windows.net/ followed by your Azure Active Directory ID. In the Allowed Token Audiences field we will enter the App ID URI of the web application registered earlier.
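For reference, the issuer URL described above is just the STS endpoint with your directory ID appended. The GUID below is a placeholder, not a real tenant.

```csharp
// Placeholder directory (tenant) ID -- substitute the value noted earlier
// from the Properties blade of your Azure Active Directory.
const string directoryId = "00000000-0000-0000-0000-000000000000";
string issuerUrl = "https://sts.windows.net/" + directoryId;
// -> https://sts.windows.net/00000000-0000-0000-0000-000000000000
```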

With our mobile app created, registered, and configured we are ready to begin work on our code.

The Visual Studio Solution

Our Visual Studio solution is made up of two projects. The first is an ASP.NET web application that leverages the Azure Mobile App project template. The second is a Xamarin.Forms application that uses the Blank App project template with Shared Project as the code sharing strategy.

Since the goal of this exercise is to authenticate with the Azure Mobile App using client directed authentication, we will only make a minor change to the ASP.NET application. Open the ValuesController.cs file in the Controllers folder and add an [Authorize] attribute to the class definition. This ensures that only authenticated users can call the methods of the ValuesController class. Publish the application to the Azure Mobile App Service you created in your Azure subscription.
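The change is a one-attribute edit. The sketch below shows roughly what the decorated controller looks like; the ValuesController generated by your template version (and its namespace) may differ slightly, so treat this as illustrative.

```csharp
using System.Web.Http;
using Microsoft.Azure.Mobile.Server.Config;

namespace MobileAppService.Controllers
{
    // Adding [Authorize] means every action on this controller now
    // requires an authenticated caller.
    [Authorize]
    [MobileAppController]
    public class ValuesController : ApiController
    {
        // GET api/values
        public string Get()
        {
            return "Hello World!";
        }
    }
}
```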

With work on the Azure Mobile App complete, we can turn our attention to the Xamarin.Forms project. Before doing anything else, open the NuGet package manager and update all of the packages used by the Xamarin.Forms project to their latest versions. Once that is complete, rebuild the solution and verify that it builds without errors. Then open the NuGet package manager again and add references to Microsoft.Azure.Mobile.Client and Microsoft.IdentityModel.Clients.ActiveDirectory. Again, rebuild the solution and verify that it builds without errors.

Open the MainPage.xaml file in the shared project and modify its contents as follows.

The Login button will be used to trigger our authentication with Azure Active Directory. The lblHello will display the user’s displayable ID when they authenticate successfully. The lblMobileServiceStatus will display the results of our calls to the Azure Mobile App.

Next we will create a simple class named ServiceManager in the shared project that provides a singleton instance of the Microsoft.WindowsAzure.MobileServices.MobileServiceClient class. The implementation of the ServiceManager class is shown below. Make sure to update the value of the _serviceUrl variable to contain the URL of your Azure Mobile App Service.
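The original listing is not reproduced in this excerpt, so here is a minimal sketch consistent with that description: a static wrapper exposing a single MobileServiceClient instance, with _serviceUrl pointing at your own Azure Mobile App Service.

```csharp
using Microsoft.WindowsAzure.MobileServices;

public static class ServiceManager
{
    // Replace with the URL of your Azure Mobile App Service.
    private static readonly string _serviceUrl = "https://your-mobile-app.azurewebsites.net";

    private static MobileServiceClient _client;

    // Single shared client used by the rest of the app.
    public static MobileServiceClient Client =>
        _client ?? (_client = new MobileServiceClient(_serviceUrl));
}
```

Keeping a single client instance means the authenticated user established during login is reused on every subsequent call.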

4. If authentication is successful, log in to the Azure Mobile App using the access token returned by Azure Active Directory and call the default Get method on the ValuesController of the Azure Mobile App (a hedged sketch of this flow appears below).
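Since the surrounding listing is not included in this excerpt, the following is a hedged sketch of a Login button handler that ties these steps together. The authority, resource, client ID, and redirect URI values are placeholders for your own registrations, and the control names (lblHello, lblMobileServiceStatus) assume the XAML described earlier; error handling is omitted for brevity.

```csharp
using System;
using System.Net.Http;
using Microsoft.IdentityModel.Clients.ActiveDirectory;
using Microsoft.WindowsAzure.MobileServices;
using Newtonsoft.Json.Linq;
using Xamarin.Forms;

public partial class MainPage : ContentPage
{
    // Placeholders -- substitute values from your own Azure AD registrations.
    private const string Authority = "https://login.windows.net/your-directory-id";
    private const string ResourceId = "https://your-mobile-app.azurewebsites.net"; // App ID URI of the web app
    private const string ClientId = "your-native-client-application-id";
    private static readonly Uri RedirectUri = new Uri("https://your-redirect-uri");

    // Populated by the platform-specific page renderer (see below).
    public IPlatformParameters PlatformParameters { get; set; }

    private async void OnLoginClicked(object sender, EventArgs e)
    {
        // Acquire an access token directly from Azure AD via ADAL.
        var authContext = new AuthenticationContext(Authority);
        AuthenticationResult authResult = await authContext.AcquireTokenAsync(
            ResourceId, ClientId, RedirectUri, PlatformParameters);

        lblHello.Text = $"Hello, {authResult.UserInfo.DisplayableId}";

        // Client directed login: hand the AAD access token to the Mobile App.
        var payload = new JObject();
        payload["access_token"] = authResult.AccessToken;
        await ServiceManager.Client.LoginAsync(
            MobileServiceAuthenticationProvider.WindowsAzureActiveDirectory, payload);

        // Call the ValuesController, which now requires an authenticated user.
        JToken values = await ServiceManager.Client.InvokeApiAsync(
            "values", HttpMethod.Get, null);
        lblMobileServiceStatus.Text = "Values API call succeeded: " + values;
    }
}
```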

That completes the changes to the MainPage.xaml.cs file. Now we need to make some minor changes to each platform specific project to support authentication to Azure Active Directory.

Add a new class named MainPageRenderer to each platform specific project (MobileAppClient.Android, MobileAppClient.iOS, and MobileAppClient.UWP). That class should inherit from PageRenderer. Add the following assembly attribute above the namespace declaration in each class:
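The attribute listing itself is not reproduced in this excerpt; a hedged sketch consistent with the description that follows would look like this.

```csharp
using Xamarin.Forms;

// Registers MainPageRenderer as the renderer for the shared MainPage on
// this platform. Place it above the namespace declaration.
[assembly: ExportRenderer(typeof(MainPage), typeof(MainPageRenderer))]
```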

The first parameter references the Xamarin.Forms page in the shared project and the second parameter identifies the page renderer that will be used to render the page on the platform. We use this approach because the implementation of the IPlatformParameters interface is slightly different on each platform.

Make the following changes to the MainPageRenderer.cs file in the Android project.
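The actual listing is not included here; the sketch below (assuming Xamarin.Forms 2.5 or later, with illustrative namespaces) shows one way the Android renderer can populate the page's PlatformParameters with the hosting Activity, which is what ADAL needs on Android. The exact code in the original post may differ.

```csharp
using Android.App;
using Android.Content;
using Microsoft.IdentityModel.Clients.ActiveDirectory;
using Xamarin.Forms.Platform.Android;

public class MainPageRenderer : PageRenderer
{
    public MainPageRenderer(Context context) : base(context)
    {
    }

    protected override void OnElementChanged(ElementChangedEventArgs<Xamarin.Forms.Page> e)
    {
        base.OnElementChanged(e);

        // ADAL on Android needs the current Activity to display the sign-in UI.
        if (e.NewElement is MainPage page)
        {
            page.PlatformParameters = new PlatformParameters((Activity)Context);
        }
    }
}
```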

With these changes in place the PlatformParameters property will be correctly populated on each platform before the code attempts to authenticate with Azure Active Directory. Rebuild the solution and run it on each platform. You should be able to successfully authenticate and invoke the Values API on the Azure Mobile App Service.

Wrapping Up

This post has provided you with the basic information needed to leverage client directed authentication with Azure Active Directory to authenticate with an Azure Mobile App Service. This approach can be used with any of the identity providers supported by the Azure Mobile App Service such as Azure Active Directory, Facebook, Google, Microsoft Account, and Twitter.

Premier Support for Developers provides strategic technology guidance, critical support coverage, and a range of essential services to help teams optimize development lifecycles and improve software quality. Contact your Application Development Manager (ADM) or email us to learn more about what we can do for you.

Sometimes the way a web application needs to manage state can have a profound impact on scalability and the load balancing configuration. There are many different options (both software and hardware) to provide load balancing solutions, but they all share some core concepts. Generally, you start by considering the application requirements for session persistence:

Sticky persistence: If the session data is maintained on the web server itself (typically in memory, e.g., InProc session state in ASP.NET), you want to configure your load balancer so that requests from the same client are directed to the same web server. Sticky sessions can also be used when you depend on local web server resources (such as the file system).

Non-sticky persistence: If the session data is maintained in a database or a distributed caching system (shared by all the web servers), it does not matter which web server the request is routed to, as all the servers have access to the shared session store.

Load balancers can be configured for SSL scenarios primarily in the following ways:

SSL offloading (or SSL termination): In this configuration, the load balancer receives the HTTPS request from the client (e.g. a browser), decrypts it, creates a new request (HTTP or HTTPS), and sends it to the web server. The web server sends the response back to the load balancer, which in turn sends the response back to the browser. In this case, the SSL certificate needs to be installed on the load balancer (LB), since it handles the encryption and decryption. It is a lot easier to configure sticky persistence with SSL offloading, because the LB can decrypt the request and use the underlying session cookies to route traffic to a specific web server. Load balancers maintain a route table and use it to determine which backend web server a request with a specific session cookie should go to. You can use other parameters, such as the client IP, to determine stickiness, but using the session cookie is probably the best idea.

Pass through: In a pass-through configuration, the load balancer simply forwards traffic coming from the client to the web servers. If it is an HTTPS request, the load balancer cannot see what is inside the request. In this case, the client IP is the most useful information available for maintaining stickiness. This may work in a controlled intranet environment; however, it is not optimal, because many clients may appear to share the same IP (hidden behind a single address), clients can change IPs during a session, and sometimes a single client can use multiple IP addresses.

Ideally, you want to design applications for non-sticky persistence, which adheres to REST principles and scales more easily to handle high capacity demands. In that case, it does not matter which server a request goes to, and load balancers can effectively route requests to web servers using whichever load balancing algorithms best suit your scenario.

Premier Support for Developers provides strategic technology guidance, critical support coverage, and a range of essential services to help teams optimize development lifecycles and improve software quality. Contact your Application Development Manager (ADM) or email us to learn more about what we can do for you.

Extensions to Application Insights Telemetry Collection
Tue, 13 Mar 2018
https://blogs.msdn.microsoft.com/premier_developer/2018/03/13/extensions-to-application-insights-telemetry-collection/

Application Development Manager Isaac Levin recently posted this article on building extensions for Application Insights. In this post, he demonstrates how to capture additional HTTP metadata using custom telemetry initializers for Application Insights.

I will start off by saying I love Application Insights. I have been using it for a long time, and am delighted at the new roll-out of features for it. I have even been giving a talk on Application Insights and how easy it is to instrument your application, so check that out if you are interested. One thing that is great about Application Insights is how extendable it is. The way the data is structured allows a developer to add custom metadata to the telemetry, as well as filter out telemetry based on specific criteria. Whenever I spin up a new app, I notice that I always add a handful of extensions to the telemetry collection process, and I thought it would be helpful to share them.
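As a flavor of what such an extension looks like, here is a hedged sketch of a custom telemetry initializer (not the exact code from the linked article) that stamps each request with an extra piece of HTTP metadata; this particular example assumes a classic ASP.NET host.

```csharp
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

// Adds the caller's User-Agent header to every request telemetry item
// as a custom property.
public class UserAgentTelemetryInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        if (telemetry is RequestTelemetry request)
        {
            var httpContext = System.Web.HttpContext.Current;
            if (httpContext?.Request != null &&
                !request.Properties.ContainsKey("UserAgent"))
            {
                request.Properties.Add("UserAgent", httpContext.Request.UserAgent);
            }
        }
    }
}
```

An initializer like this is registered either in ApplicationInsights.config or in code via TelemetryConfiguration.Active.TelemetryInitializers.Add(...).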

Premier Support for Developers provides strategic technology guidance, critical support coverage, and a range of essential services to help teams optimize development lifecycles and improve software quality. Contact your Application Development Manager (ADM) or email us to learn more about what we can do for you.

Reaching your full potential as a developer requires you to be highly effective. In this blog, I will discuss some principles that are important for modern developers to be successful. The ideas generated from these principles are based on Stephen Covey's best-selling book, The 7 Habits of Highly Effective People. As developers, our primary goal is to make things easier and/or to create more engaging experiences for users. These seven principles take a modern approach to achieving that primary goal.

1. Be Proactive with DevOps

a. Being reactive doesn't allow you to be innovative. DevOps gives you control over the process and tools for building, testing, and releasing software applications. For many years, and even today, some teams only release software on the weekend or in the middle of the night. This can be because developers and technology operations resources haven't integrated or accepted a DevOps culture that embraces automating software delivery through continuous integration and deployment. Being proactive with DevOps increases the reliability of environment resources and helps when automating repeatable tasks.

a. For a long time at Microsoft, we believed that we could create all the products and tools that would solve any problem by leveraging great dev teams and driving widely adopted products. More recently, we have changed that thinking and embraced open source technologies and services as an integrated part of developing solutions for customers. Today's effective developers realize that the .NET platform and Windows Server work well for many solutions, but they are open to using Linux servers and other development platforms to solve problems.

Check out the story of how Microsoft has embraced open source tools and services like Kubernetes, Node.js, Chef, and more.

3. Put the Cloud First

a. It's extremely important to consider a cloud-first approach in your app development. The cloud helps remove barriers and creates flexibility, scalability, and availability for your applications' services. A cloud-first approach allows developers to focus on innovation rather than managing networks, operating systems, and storage needs, freeing technology resources to focus on more strategic responsibilities and outcomes.

a. Rethinking the way we design and architect applications for a variety of platforms, devices, services, and consumers can be challenging. When we understand the benefits of containers and closely examine opportunities for serverless computing, we can transform monolithic legacy applications. Serverless applications help reduce code and speed up the development process at scale. Effective developers use containers to maximize deployment flexibility, and serverless as an easy option for integrating scaling, hosting, and monitoring.

a. Now that mobile is part of most users' digital experience, it gives us an opportunity to empathically listen to and understand customer needs. This can lead to more powerful and engaging experiences with artificial intelligence. Cognitive Services, machine learning, and Bot Services give developers new, exciting, and unexpected ways of understanding and interacting with data through voice, video, images, and text.

See how to build these engaging mobile and AI experiences with Microsoft platforms and tools.

6. Synergize through Insights

a. We can gain insights through various forms of telemetry. Independently, each area has limited value, but the synergy of all insights leads to opportunities for new services and a deeper understanding of the customer experience. When developers have complete visibility into their applications, they can monitor events, app performance, exceptions, and session details to help diagnose issues for users across the entire solution stack.

These new habits give us something to think about and work toward as we become more effective in our daily activities. Almost 10 years ago, a former colleague of mine, John Powell, wrote a blog on The 7 Habits of Highly Effective Developers that made sense for developers in 2008. While those principles can still be effective, there are a lot of new capabilities and opportunities to consider as a developer today.

Stay #Winning and keep developing amazing experiences my friends…

Premier Support for Developers provides strategic technology guidance, critical support coverage, and a range of essential services to help teams optimize development lifecycles and improve software quality. Contact your Application Development Manager (ADM) or email us to learn more about what we can do for you.

The first part of this blog will go over how to create a sample ASP.NET Core web application with Docker support. We will use this as our demo app to deploy to the Kubernetes cluster. Then we will go over how VSTS can be used to create a CI build that builds the application, packages the build output into a Docker image, and pushes the image to Docker Hub. After that, we will point you to resources that show how you can create a test OpenShift Kubernetes cluster. Finally, we will go over how VSTS Release Management can be used to continuously deploy to the OpenShift Kubernetes cluster. As you might have guessed, this might not be easy to set up. Luckily, the Continuous Integration (CI) and Continuous Deployment (CD) aspects are greatly simplified by VSTS, as you will see later. Let's get to work.