
How has the world of software development changed in the era of Docker and Kubernetes? Is it possible to build an architecture once and for all using these technologies? Is it possible to unify the processes of development and integration when everything is "packed" in containers? What are the requirements for such decisions? What restrictions do they bring into play? Will they make developers' lives easier or, instead, add unnecessary complications?

It's time to shed light on these (and some other) questions! (In text and with original illustrations.)

This article will take you on a journey from real life to development processes to architecture and back to real life, giving answers to the most important questions at each of these stops along the way. We will try to identify a number of components and principles that should become part of an architecture and demonstrate a few examples without diving too deep into their implementation.

The conclusion of the article may upset or please you. It all depends on your experience, your perception of this three-chapter story, and perhaps even your mood at the time of reading. Let me know what you think by posting comments or questions below!

From real life to development workflows

For the most part, all development processes that I have ever seen or been honored to set up served one simple goal—reduce the time between the birth of an idea and its delivery to production, while maintaining a certain degree of code quality.

It doesn't matter whether the idea is good or bad. Bad ideas come and go fast: you try them, reject them, and let them disintegrate. What's worth mentioning here is that rolling back a bad idea falls on the shoulders of a robot that automates your workflow.

Continuous integration and delivery seem like a life saver in the world of software development. What can be simpler than that, after all? You have an idea, you have the code, so go for it! It would’ve been flawless if not for a slight problem—the process of integration and delivery is rather difficult to formalize in isolation from the technology and business processes that are specific to your company.

However, despite the seeming complexity of the task, life constantly throws in excellent ideas that bring us (well, myself for sure) a little closer to building a flawless mechanism that can be useful in almost any occasion. The most recent step toward such a mechanism, for me, has been Docker and Kubernetes, whose level of abstraction and ideological approach made me think that 80% of issues can now be solved using practically the same methods.

The remaining 20% obviously didn’t go anywhere. But this is exactly where you can focus your inner creative genius on interesting work, rather than deal with the repetitive routine tasks. Taking care of the “architectural framework” just once will let you forget about the 80% of solved problems.

What does all of this mean, and how exactly does Docker solve the problems of the development workflow? Let’s look at a simple process, which also happens to be sufficient for a majority of work environments:

With due approach, you can automate and unify everything from the sequence below, and forget about it for months to come.

Setting up a development environment

A project should contain a docker-compose.yml file, which can spare you the trouble of thinking about what you need to do and how to run the application/service on the local machine. A simple docker-compose up command should start up your application with all its dependencies, populate the database with fixtures, upload the local code inside the container, enable code watching for compilation on the fly, and eventually start responding at the expected port. Even when setting up a new service, you needn't worry about how to start, where to commit changes or which frameworks to use. All of this should be described in advance in the standard instructions and dictated by the service templates for different setups: frontend, backend, and worker.
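To make this more concrete, here is a minimal sketch of such a docker-compose.yml, assuming a typical backend service with a database; the service names, port, and fixture script are placeholders, not a prescribed layout:

```yaml
# docker-compose.yml (sketch): one command brings up the service, its database,
# local code mounted into the container, and fixture loading
version: "3"
services:
  app:
    build: .                      # built from the service's own Dockerfile
    ports:
      - "8080:8080"               # the expected port the service responds at
    volumes:
      - ./:/srv/app               # local code inside the container for on-the-fly rebuilds
    environment:
      DATABASE_URL: postgres://app:app@db:5432/app
    depends_on:
      - db
  db:
    image: postgres:10
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
      POSTGRES_DB: app
  fixtures:
    build: .
    command: ./scripts/load_fixtures.sh   # hypothetical script that populates the database
    depends_on:
      - db
```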

Automated testing

All you want to know about the "black box" (more on why I call the container this later in the text) is that everything's all right inside it. Yes or no. 1 or 0. Having a finite number of commands that can be executed inside the container, and a docker-compose.yml describing all of its dependencies, you can easily automate and unify testing without focusing too much on the implementation details.

Here testing means not only and not so much unit testing, but also functional testing, integration testing, code style and duplication checks, checking for outdated dependencies, violation of licenses for used packages, and many other things. The point is that all of this should be encapsulated inside your Docker image.
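For illustration, the same compose setup can carry a dedicated test-runner service, so the whole suite stays a single unified command whatever is inside the image; the override file and the script name below are assumptions:

```yaml
# docker-compose.override.yml (sketch): a CI-only service that runs every check
# encapsulated in the image: unit, functional, style, duplication, and license checks
version: "3"
services:
  tests:
    build: .
    command: ./scripts/run_all_checks.sh
    depends_on:
      - db
```

With something like this in place, docker-compose run tests becomes the single entry point, and its exit code is the 1-or-0 answer the pipeline cares about.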

Systems delivery

It doesn’t matter when and where you want to install your project. The result, just like the installation process, should always be the same. There is also no difference as to which part of the whole ecosystem you’re going to be installing or which git repo you’ll be getting it from. The most important component here is idempotence. The only thing that you should specify is the variables that control the installation.

Here’s the algorithm that seems to me quite effective at solving this problem:

Build the Docker images for all affected systems and push them to the registry; then, using a meta-project, deliver these images to Kubernetes via the Kube API. Initiating a delivery usually requires several input parameters:

Kube API endpoint

a "secret" object that varies for different contexts (local/showroom/staging/production)

the names of the systems to deploy and the tags of the Docker images for these systems (obtained at the previous step)

As an example of a meta-project that encompasses all systems and services (in other words, a project that describes how the ecosystem is arranged and how updates are delivered to it), I prefer to use Ansible playbooks with a module for integration with the Kube API. Sophisticated automators can refer to other options, and I'll dwell on my own choices later. Either way, you have to think of a centralized/unified way of managing the architecture. Having one will let you conveniently and uniformly manage all services/systems and neutralize any complications that the upcoming jungle of technologies and systems performing similar functions may throw at you.
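As a rough, hedged sketch of what a delivery playbook in such a meta-project might look like (module name, file layout, and variables here are illustrative and depend on the Ansible version and Kubernetes module you actually pick):

```yaml
# deliver.yml (sketch): apply the per-context secret, then deploy a system
# with the image tag produced at the build step
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Apply the "secret" object for this context (local/showroom/staging/production)
      k8s:
        host: "{{ kube_api_endpoint }}"
        state: present
        src: "secrets/{{ context }}.yml"

    - name: Deploy a system using the Docker image tag passed in as a parameter
      k8s:
        host: "{{ kube_api_endpoint }}"
        state: present
        definition: "{{ lookup('template', 'deployments/' + system_name + '.yml.j2') }}"
```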

Typically, an installation of the environment is required in:

“ShowRoom”—for some manual checks or debugging of the system

“Staging”—for near-live environments and integrations with external systems (usually located in the DMZ as opposed to ShowRoom)

“Production”—the actual environment for the end user

Continuity in integration and delivery

If you have a unified way of testing Docker images—or "black boxes"—you can assume that the results of such tests allow you to seamlessly (and with a clear conscience) integrate a feature branch into the upstream or master branch of your git repository.

Perhaps the only deal breaker here is the sequence of integration and delivery. When there are no releases, how do you prevent a "race condition" on one system with a set of parallel feature branches?

Therefore, this process should be started only when there’s no competition, otherwise the “race condition” will keep haunting your mind:

Try to update the feature-branch to upstream (git rebase/merge)

Build images from Dockerfiles

Test all the built images

Start and wait until the systems with the images from step 2 are delivered

If the previous step failed, roll back the ecosystem to the previous state

Merge the feature branch into upstream and push it to the repository

Any failure at any step should terminate the delivery process and return the task to the developer to fix the error, whether it’s a failed test or a merge conflict.

You can use this process to work with more than one repository. Just do each step for all repositories at once (step 1 for repos A and B, step 2 for repos A and B, and so on), instead of doing the whole process repeatedly for each individual repository (steps 1–6 for repo A, steps 1–6 for repo B, and so on).
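To illustrate the ordering and the fail-fast behaviour, the sequence above can be sketched as stages in a generic YAML-driven CI pipeline; the stage names and scripts are placeholders for your own tooling:

```yaml
# pipeline sketch: each stage runs only if the previous one succeeded,
# so any failure returns the task to the developer before the merge happens
stages:
  - rebase
  - build
  - test
  - deliver
  - merge

rebase_feature_branch:
  stage: rebase
  script: ./ci/rebase_onto_upstream.sh            # step 1: update the feature branch to upstream

build_images:
  stage: build
  script: ./ci/build_images_from_dockerfiles.sh   # step 2

test_images:
  stage: test
  script: ./ci/test_all_built_images.sh           # step 3

deliver_systems:
  stage: deliver
  script: ./ci/deliver_and_wait.sh                # step 4; rolls the ecosystem back (step 5) on failure

merge_to_upstream:
  stage: merge
  script: ./ci/merge_and_push.sh                  # step 6: reached only when everything above passed
```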

In addition, Kubernetes allows you to roll out updates in parts for carrying out various A/B tests and risk analysis. Kubernetes does it internally by separating services (access points) and applications. You can always balance the new and the old versions of a component in the desired proportion to facilitate problem analysis and make way for a potential rollback.

Rollback systems

One of the mandatory requirements for an architectural framework is the ability to reverse any deployment. This, in turn, entails a number of explicit and implicit nuances. Here are some of the most important of them:

A service should be able to set up its environment as well as roll back changes: database migrations, RabbitMQ schemas, and so on.

If it's not possible to roll back the environment, the service should be polymorphic and support both the old and the new versions of the code. For example: database migrations shouldn't disrupt the old versions of the service (usually, the 2 or 3 past versions).

Backwards compatibility of any service update. Usually, this is API compatibility, message formats and so on.

It is fairly simple to roll back states in a Kubernetes cluster (run kubectl rollout undo deployment/some-deployment and Kubernetes will restore the previous "snapshot"), but for this to work, your meta-project should contain information about this snapshot. More complex delivery rollback algorithms are highly discouraged, although they are sometimes necessary.
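For instance, keeping those snapshots around is mostly a matter of the Deployment spec; a minimal sketch (names, image, and revision count are illustrative):

```yaml
# Deployment sketch: old ReplicaSets are retained so that
# `kubectl rollout undo` can restore a previous "snapshot"
apiVersion: apps/v1
kind: Deployment
metadata:
  name: some-deployment
spec:
  revisionHistoryLimit: 5          # how many past revisions are kept for rollback
  replicas: 2
  selector:
    matchLabels:
      app: some-app
  template:
    metadata:
      labels:
        app: some-app
    spec:
      containers:
        - name: some-app
          image: registry.example.com/some-app:1.2.3   # tag supplied by the meta-project
```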

Here’s what can trigger the rollback mechanism:

High percentage of application errors after a release

Signals from key monitoring points

Failed smoke tests

Manual mode—human factor

Ensuring information security and audit

There is no single workflow that can magically “build” bulletproof security and protect your ecosystem from both external and internal threats, so you need to make sure that your architectural framework is executed with an eye on the standards and security policies of the company at each level and in all subsystems.

I will address all three levels of the proposed solution later, in the section about monitoring and alerting, which themselves also happen to be critical to system integrity.

Kubernetes has a set of good built-in mechanisms for access control, network policies, audit of events, and other powerful tools related to information security, which can be used to build an excellent perimeter of protection that can resist and prevent attacks and data leaks.
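As one small illustration of those built-in mechanisms, a default-deny NetworkPolicy per namespace is a common baseline for such a perimeter (the namespace name is a placeholder):

```yaml
# deny all ingress traffic to every pod in the namespace unless another
# NetworkPolicy explicitly allows it
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: some-namespace
spec:
  podSelector: {}        # selects every pod in the namespace
  policyTypes:
    - Ingress            # no ingress rules are listed, so all ingress is denied
```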

From development workflows to architecture

An attempt to build a tight integration between the development workflows and the ecosystem should be taken seriously. Adding a requirement for such integration to the traditional set of requirements for an architecture (flexibility, scalability, availability, reliability, protection against threats, and so on) can greatly increase the value of your architectural framework. It is so crucial an aspect that it has resulted in the emergence of a concept called "DevOps" (Development Operations), which is a logical step towards the total automation and optimization of the infrastructure. However, given a well-designed architecture and reliable subsystems, DevOps tasks can be minimized.

Micro-service architecture

There is no need to go into details about the benefits of a service-oriented architecture (SOA), including why services should be "micro". I will only say that if you have decided to use Docker and Kubernetes, then you most likely understand (and accept) that it's difficult and even ideologically wrong to have a monolithic architecture. Designed to run a single process and work with persistence, Docker forces us to think within the DDD (Domain-Driven Design) framework. In Docker, packed code is treated as a black box with some exposed ports.

Critical components and solutions of the ecosystem

From my experience of designing systems with increased availability and reliability, there are several components that are crucial to the operation of micro-services. I'll list and talk about each of these components later, and even though I'll be referring to them in the context of a Kubernetes environment, you can refer to my list as a checklist for any other platform.

If you (like me) come to the conclusion that it'd be great to manage each of these components as a regular Kubernetes service, then I'd recommend running them in a separate cluster other than "production". For example, a "staging" cluster, because it can save your life when the production environment is unstable and you desperately need a source of its images, code, or monitoring tools. That solves the chicken-and-egg problem, so to speak.

Identity service

As usual, it all starts with access—to servers, virtual machines, applications, office mail, and so on. If you are or want to be a client of one of the major enterprise platforms (IBM, Google, Microsoft), the access issue will be handled by one of the vendor's services. But what if you want to have your own solution, managed only by you and within your budget?

This list should help you decide on the appropriate solution and estimate the effort required to set up and maintain it. Of course, your choice must be consistent with the company’s security policy and approved by the information security department.

Automated service provisioning

Although Kubernetes requires only a handful of components on physical machines/cloud VMs (docker, kubelet, kube-proxy, an etcd cluster), you still need to automate the addition of new machines and cluster management. Here are a few simple ways to do it:

KOPS—this tool allows you to install a cluster on one of the two cloud providers—AWS or GCE

Terraform—this lets you manage the infrastructure for any environment, and follows the ideology of IaC (Infrastructure as Code)

Ansible—a universal automation tool, which (with a small Kubernetes integration module) can manage both servers and k8s objects

Personally, I prefer the third option, since it allows me to work with both servers and k8s objects and carry out any kind of automation. However, nothing stops you from using Terraform and its Kubernetes module. KOPS doesn't work well with "bare metal", but it's still a great tool to use with AWS/GCE!

Log collection and analysis

The only way for any Docker container to make its logs accessible is to write them to the STDOUT or STDERR of the root process running in the container. The service developer doesn't really care what happens to the log data next; the main thing is that it should be available when necessary and preferably contain records up to a certain point in the past. All responsibility for fulfilling these expectations lies with Kubernetes and the engineers who support the ecosystem.

In the official documentation, you can find a description of a basic (and good) strategy for working with logs, which will help you choose a service for aggregating and storing huge volumes of text data.

Among the recommended services for a logging system, the same documentation mentions fluentd for collecting data (when launched as an agent on each node of the cluster) and Elasticsearch for storing and indexing it. You may disagree with the efficiency of this solution, but it's reliable and easy to use, so I think it's at least a good start.

Elasticsearch is a resource-intensive solution but it scales well and has ready-made Docker images to run both an individual node and a cluster of a required size.
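As a hedged sketch of the collection side, fluentd is usually run as a DaemonSet so that one agent per node reads the containers' STDOUT/STDERR from the node's filesystem; the image tag and Elasticsearch address below are examples only, and the official documentation carries the full manifests:

```yaml
# fluentd agent on every node, shipping collected logs to Elasticsearch
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: logging
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
        - name: fluentd
          image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
          env:
            - name: FLUENT_ELASTICSEARCH_HOST     # where the collected logs are shipped
              value: elasticsearch.logging.svc
          volumeMounts:
            - name: varlog
              mountPath: /var/log
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
```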

Tracing system

As perfect as your code can be, failures do happen and then you want to study them with a fine-tooth comb on production and try to understand “what the hell went wrong if everything worked fine on my local machine?”. Slow database queries, improper caching, slow disks or connectivity with an external resource, transactions in the ecosystem, bottlenecks and under-scaled computing services are some of the reasons why you will be forced to track and estimate the time spent executing your code under a real load.

Opentracing and Zipkin cope with this task for most modern programming languages without adding any extra burden beyond instrumenting the code. Of course, all the collected data should go to a suitable storage, which is one of the framework's components.

The complexities that arise when instrumenting the code and forwarding “Trace Id” through all the services, message queues, databases, and so on are solved by the above-mentioned development standards and service templates. The latter also take care of uniformity of the approach.

Monitoring and alerting

Monitoring is one of the few auxiliary systems that must be installed inside a cluster. And the cluster is an entity that is subject to monitoring. But monitoring of a monitoring system (pardon the tautology) can only be performed from the outside (for example, from the same "staging" environment). In this case, cross-checking comes in handy as a convenient solution for any distributed environment, which wouldn't complicate the architecture of your highly unified ecosystem.

The whole range of monitoring is divided into three completely logically isolated levels. Here is what I think are the most important examples of tracking points at each level:

Cluster level:

—Availability of the main cluster systems on each node (kubelet, kubeAPI, DNS, etcd, and so on)

—The number of free resources and their uniform distribution

—Monitoring of permitted vs. actual resource consumption by services

—Reloading of pods

Service level:

—Any kind of application monitoring, from database contents to the frequency of API calls

—Number of HTTP errors on the API gateway

—Size of the queues and the utilization of the workers

—Multiple metrics for the database (replication lag, time and number of transactions, slow queries, and more)

—Error analysis for non-HTTP processes

—Monitoring of requests sent to the log system (you can transform any request into metrics)
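For example, one of those service-level tracking points, the share of HTTP errors on the API gateway, can be expressed as an alerting rule; this is a sketch in Prometheus rule syntax, and the metric names and threshold are assumptions about what your gateway exporter exposes:

```yaml
# alert when more than 5% of gateway requests fail over ten minutes
groups:
  - name: service-level
    rules:
      - alert: HighGatewayErrorRate
        expr: |
          sum(rate(http_requests_total{job="api-gateway", status=~"5.."}[5m]))
            / sum(rate(http_requests_total{job="api-gateway"}[5m])) > 0.05
        for: 10m
        labels:
          severity: critical
        annotations:
          summary: "More than 5% of API gateway requests are failing"
```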

As for alert notifications at each level, I'd like to recommend using one of the countless external services that can send notifications to email or SMS, or make calls to a mobile number. I'll also mention another system, OpsGenie, which has a close integration with Prometheus's Alertmanager.

OpsGenie is a flexible tool for alerting which helps to deal with escalations, round-the-clock duty, notification channel selection and much more. It’s also easy to distribute alerts among teams. For example, different levels of monitoring should send notifications to different teams/departments: physical—Infra + Devops, cluster—Devops, applications—each to a relevant team.

API gateway and Single Sign-on

To handle tasks such as authorization, authentication, user registration (of external users, i.e. the company's clients) and other kinds of access control, you will need a highly reliable service that can maintain a flexible integration with your API gateway. There is no harm in using the same solution as for the "Identity service"; however, you may want to separate the two resources to achieve a different level of availability and reliability.

The inter-service integration should not be complicated and your services should not worry about authorization and authentication of users and each other. Instead, the architecture and the ecosystem should have a proxy service that handles all communications and HTTP traffic.

Let’s consider the most suitable way of integration with the API gateway, hence with your entire ecosystem—tokens. This method is good for all three access scenarios: from the UI, from service to service, and from an external system. Then the task of receiving a token (based on the login and password) lies with the UI itself or with the service developer. It also makes sense to distinguish between the lifetime of tokens used in the UI (shorter TTL) and in other cases (longer and custom TTL).

Here are some of the problems that the API gateway resolves:

Access to the ecosystem’s services from outside and within (services do not communicate directly with each other)

Integration with a Single Sign-on service:

—Transformation of tokens and appending HTTPS requests with headers containing user identification data (ID, roles, other details) for the requested service

—Enabling/disabling access control to the requested service based on the roles received from the Single Sign-on service

Ability to manage routing for the entire ecosystem based on the domains and requested URIs

Single access point for external traffic, and integration with the access provider
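To make one of these points more tangible, here is a hedged sketch of a single route declared at the gateway level; it assumes an NGINX Ingress controller and a hypothetical internal SSO endpoint that validates tokens and returns user headers:

```yaml
# route one external domain/URI to an internal service, delegating token
# validation to the SSO service and forwarding user identification headers
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: orders-api
  annotations:
    nginx.ingress.kubernetes.io/auth-url: "http://sso.auth.svc/validate"
    nginx.ingress.kubernetes.io/auth-response-headers: "X-User-Id, X-User-Roles"
spec:
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /orders
            pathType: Prefix
            backend:
              service:
                name: orders
                port:
                  number: 8080
```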

Event bus and Enterprise Integration/Service bus

If your ecosystem contains hundreds of services that work in one macro domain, you will have to deal with thousands of possible ways in which the services can communicate. To streamline data flows, you should think of the ability to distribute messages over a large number of recipients upon the occurrence of certain events, regardless of the contexts of the events. In other words, you need an event bus to publish events based on a standard protocol and to subscribe to them.

As an event bus, you can use any system that can operate as a so-called broker: RabbitMQ, Kafka, ActiveMQ, and others. In general, high availability and consistency of data are critical for micro-services, but, due to the CAP theorem, you'll still have to sacrifice something to achieve a proper distribution and clustering of the bus.

Naturally, the event bus should be able to solve all sorts of problems of inter-service communication, but as the number of services grows from hundreds to thousands to tens of thousands, even the best event bus-based architecture will fail, and you will need to look for another solution. A good example would be an integration bus approach, which can extend the capabilities of the “Dumb pipe—Smart consumer” tactics described above.

There are dozens of reasons for using the "Enterprise Integration/Service Bus" approach, which aims to reduce the complexities of a service-oriented architecture. Here are just a few of these reasons:

Aggregation of multiple messages

Splitting of one event into several events

Synchronous/transactional analysis of the system response to an event

Coordination of interfaces, which is especially important for integration with external systems

Advanced logic of event routing

Multiple integration with the same services (from the outside and within)

Non-scalable centralization of the data bus

As open-source software for an enterprise integration bus, you may want to consider Apache ServiceMix, which includes several components essential for the design and development of this kind of SOA.

Databases and other stateful services

Just like Kubernetes, Docker has changed the rules of the game once and for all for services that require data persistence and work closely with the disk. Some say that such services should "live" the old way, on physical servers or virtual machines. I respect this opinion and I won't go into arguments about its pros and cons, but I'm fairly certain that such statements exist only because of a temporary lack of knowledge, solutions, and experience in managing stateful services in a Docker environment.

I should also mention that a database often occupies the central place in the storage world, and therefore the solution you select should be fully prepared to work in a Kubernetes environment.

Based on my experience and the market situation, I can distinguish the following groups of stateful services along with examples of the most suitable Docker-oriented solutions for each of them:

Database management systems—PostDock is a simple and reliable solution for PostgreSQL inside any Docker environment

Queue/message brokers—RabbitMQ is classic software for building a message queue system and routing the messages. The cluster_formation parameter in RabbitMQ's configuration is indispensable for a cluster setup

Caching services—Redis is considered one of the most reliable and flexible data caching solutions

Full-text search—the Elasticsearch stack, which I have already mentioned above, was originally used for full-text search, but it is just as good at storing logs and any kind of work with large amounts of text data

File storage services—a generalizing group of services for any type of file storage and delivery (ftp, sftp, and so on)

Dependency mirrors

If you haven’t yet encountered a situation where the packages or dependencies you need have been removed or made temporarily unavailable from a public server, don’t think that this will never happen. To avoid any unwanted unavailability and provide security to the internal systems, make sure that neither building nor delivery of your services require an Internet connection. Configure mirroring and copying of all dependencies to the internal network: Docker images, rpm packages, source repositories, python/go/js/php modules.

Like it or not, sooner or later your entire architecture will be doomed to failure. It always happens: technologies become obsolete fast (1–5 years), methods and approaches—a bit slower (5–10 years), design principles and fundamentals—occasionally (10–20 years), yet inevitably.

Mindful of the obsolescence of technology, always try to keep your ecosystem at the peak of tech innovations, plan and roll out new services to meet the needs of developers, business and end users, promote new utilities to your stakeholders, deliver knowledge to move your team and the company forward.

Stay on top of the game by integrating into the professional community, reading relevant literature and socializing with colleagues. Be aware of your opportunities as well as the correct use of new trends in your project. Experiment and apply scientific methods to analyze the results of your research, or rely on the conclusions of other people that you trust and respect.

It is difficult to prepare yourself for fundamental changes, yet possible if you are an expert in your field. All of us will only witness a few major technological changes throughout our lives, but it's not the amount of knowledge in our heads that makes us professionals and brings us to the top, it's the openness to new ideas and the ability to accept metamorphosis.

I recently gave a talk at http://www.dotnet-developer-conference.de/ about Microservices and Scalable Backends with Azure Service Fabric, and also about the transition path from a legacy application to a scalable application.

Another sensation on the topic of Microsoft and Linux: Microsoft has joined the Linux Foundation and will be a board member there going forward! At the same time, Google has become a member of the .NET Foundation!

Microsoft has unveiled Visual Studio for Mac OS X. The new IDE is not, however, a port of the existing Visual Studio, which runs only on Windows, but an extended version of Xamarin Studio, the product acquired with Xamarin at the beginning of 2016, which in turn is based on the free MonoDevelop. As a first step in Microsoft's "mobile first, cloud first" strategy, the new IDE supports only the development of Xamarin apps for iOS and Android, as well as web applications and REST-based web services with .NET Core. As programming languages, developers can choose between C# and F#.

Visual Studio “15” is now called Visual Studio 2017 and is now available as a release candidate.

Visual Studio 2017 and Visual Studio for Mac now offer a graphical preview for Xamarin Forms. Until now, developers could write XAML tags in the editor but only saw the result at run time. Graphical design with the mouse, which developers of Windows Presentation Foundation (WPF), Silverlight, or Windows universal apps are used to, is still not possible for a Xamarin Forms interface.

In Entity Framework Core 1.1, Microsoft brings back some programming features from its predecessor, Entity Framework 6.1, that were missing in Entity Framework Core 1.0: searching for objects in the cache with Find(), explicit loading of related objects with the Load() method, refreshing previously loaded objects with Reload(), as well as simple querying of object state, old and new, with GetDatabaseValues() and GetModifiedProperties(). Mapping to simple fields instead of properties and the recovery of lost database connections (connection resiliency) to Microsoft SQL Server or SQL Azure are also possible again. The ability to map objects to the memory-optimized tables of Microsoft SQL Server 2016 is brand new. In addition, the makers have simplified the API by which Entity Framework Core's standard functions can be replaced with your own implementations.

.NET Core 1.1 adds Linux Mint 18, openSUSE 42.1, macOS 10.12, and Windows Server 2016 as supported operating systems. Samsung also delivers .NET Core for Tizen. A further step toward cross-platform compatibility of Microsoft products, .NET Core for the Tizen operating system has been developed, together with the relevant Visual Studio tools, by Samsung, which has been a member of the .NET Foundation since June 2016.

Team Foundation Server "15" receives the 2017 version number, like Visual Studio. Having previously reached the release candidate 2 stage, it is now available as an RTM version.

The new Visual Studio Mobile Center is a cloud application that automatically builds the GitHub-hosted source code of iOS and Android apps at each commit, tests it on real hardware, and distributes successfully tested app packages to beta testers. Usage analysis and run-time failure reporting are also possible. The supported programming languages are Swift, Objective-C, Java, and C# (Xamarin). Support for Cordova and Windows universal apps is planned.

SQL Server for Linux: In March, Microsoft announced that it would also provide SQL Server for Linux in the future. A preview version of the database server is now available for Red Hat Enterprise Linux, Ubuntu Linux, macOS, and Windows, as well as for Docker. A version for SUSE Linux Enterprise Server is to follow soon. The latest release is called SQL Server vNext Community Technology Preview and represents an evolution of SQL Server 2016; in addition to platform independence, new features were shown in the keynote that had not previously been listed on the website.

For SQL Server 2016, Service Pack 1 is now available; it contains significant improvements to the licensing model in addition to bug fixes.

For Mac developers, Xamarin Studio is now available as a benefit of Visual Studio Professional or Enterprise subscriptions. Developers can use the newly created Xamarin Studio Community Edition for free.

Another big announcement is that the Mono Project has been added to the .NET Foundation, including some previously proprietary mobile-specific improvements to the Mono runtime. Mono will also be re-released under the MIT License, to enable an even broader set of uses for everyone. More details are on the Mono Project blog.

The changes to Mono remove all barriers to adopting a modern, performant .NET runtime in any software product, embedded device, or engine, and open the door to easily integrating C# with apps and games on iOS, Android, Mac, and Windows, and any emerging platforms developers want to target in the future.

Xamarin is the manufacturer of tools for cross-platform development based on C#. The company also employs the developers who created Mono and Moonlight as open-source alternatives to .NET and Silverlight.

The Californian company Xamarin provides an app-development environment based on the C# programming language and .NET classes. In addition to iOS, OS X, Android, and Windows, Xamarin now also supports tvOS, watchOS, and PlayStation. The basis for Xamarin's products is Mono, an open-source implementation of Microsoft's .NET Framework, which has existed since 2001.

Does what belongs together finally come together?

Mono is a separate implementation because Microsoft at the time declared the source code of .NET not as "open source" but as "Shared Source": following the motto "look, but don't touch", further use of the source code was not allowed. The Mono Project, in a complex effort, rebuilt the .NET runtime environment, the C# compiler, and a large part of the .NET class library, and strove to remain compatible with Microsoft's model.

Now it has come true: Microsoft has acquired Xamarin, a manufacturer of tools for cross-platform development.

Why two implementations?

.NET developers can now hope that the two platforms grow closer together. While many base classes are uniform, there has been no sophisticated user-interface technology that can run on all platforms. With Windows Presentation Foundation (WPF), Windows Runtime XAML, and Xamarin Forms there are instead three dialects of the Extensible Application Markup Language (XAML), which are not entirely compatible with one another. Xamarin Forms, which at least runs on iOS and Android as well as Windows 10 universal apps, still lags far behind the other XAML dialects in its development.

Also, at a time when Microsoft is developing the .NET Framework as .NET Core, itself open source and platform-neutral, continuing Mono as a competitor to .NET Core no longer appears meaningful.

Shortly before Christmas 2015, TFS 2015 Update 1 was released; this release includes some new features that bring more productivity to team development and DevOps.

Here are some of my favorites:

TFVC and Git in the same Team Project

Lots of my customers have TFVC (centralized version control) in TFS. When Git support came out, the only option they had if they wanted to switch to Git was to create a new “Git-based” Team Project and port source code over. Then they got into a horrible situation where work items were all in the TFVC Team Project, and the source code was in the new Git Team Project.

Now, you can simply add a new Git repo to an existing TFVC Team Project! Navigate to the Code hub in Web Access, click the repository drop-down (in the top left of the Code pane) and select “New Repository”:

Enter the name of your repo and click Create. You’ll see the new “Empty Git page” (with a handy “Clone in Visual Studio” button):

The Repository drop-down now shows multiple repos, each with their corresponding TFVC or Git icon:

You can also add TFVC to a Git Team Project! This makes sense if you want to source control large assets. That way you can have your code in Git, and then source control your assets in TFVC, all in the same team project.

If you’re looking for alternatives to supporting large files in Git, then you’ll be pleased to note that VSO supports Git-LFS. Unfortunately, it’s not in this CTP – though it is planned for the Update 1 Release. As a matter of interest, the real issue is the NTLM authentication support for Git-LFS – the product team are going to submit a PR to the GitHub Git-LFS repo so that it should be supported by around the time Update 1 releases.

Query and Notifications on Kanban Column

Customizing Kanban columns is great – no messing in the XML WITD files – just open the settings, map the Kanban column to the work item states, and you’re good to go. But what if you want to query on Kanban column – or get a notification if a work item moves to a particular column? Until now, you couldn’t.

If you open a work item query editor, you’ll see three additional fields that you can query on:

Board Column – which column the work item is in. Bear in mind that the same work item could be in different columns for different teams.

Board Column Done – corresponding to the “Doing/Done” split

Board Lane – the swimlane that the work item is in

Not only can you query on these columns, but you can also add alerts using these fields. That way you could, for example, create an alert for “Stories moved to Done in the Testing column in the Expedite Lane”.

Pull Requests in Team Explorer

You’ll need Visual Studio 2015 Update 1 for this to work. Once you have Update 1, you’ll be able to see and create Pull Requests in the Team Explorer Pull Requests tile. You can also filter PRs and select between Active, Completed and Abandoned PRs. There are PRs you’ve created as well as PRs assigned to you or your team. Clicking a PR opens it up in Web Access:

Team Board Enhancements

There’s a lot to discuss under this heading. If you’re using the Kanban boards, you’ll want to upgrade just for these enhancements.

Team Board Settings Dialog

The Board Settings Dialog has been revamped. Now you can customize the cards, columns, CFD and team settings from a single place – not a single “admin” page in sight! Just click the gear icon on the top right of a Backlog board, and the Settings dialog appears:

Field Settings

TFS 2015 RTM introduced field customization, so not much has changed here. There’s an additional setting that allows you to show/hide empty fields – if you’ve got a lot of cards, hiding empty fields makes the cards smaller where possible, allowing more cards on the board than before.

Customisable Styles

You can now set conditional styling on the cards. For example, I’ve added some style rules that color my cards red (redder and reddest) depending on the risk:

You can drag/drop the rules (they fire in order) and of course you can create rules for multiple fields and conditions. You can change the card color and/or the title color (and font style) if the condition matches. Here's my board after setting the styles:

Tag Coloring

You can now colorize your tags. You can see the iPhone and WindowsPhone tags colored in the board above because of these settings:

Team Board Settings

Under board settings, you'll be able to customize the Columns (and their state mappings), the Doing/Done split, and the Definition of Done. Again you'll see a drag/drop theme allowing you to re-order columns.

The same applies to the swimlanes configuration.

As a bonus, you can rename a Kanban column directly on the board by clicking the header:

Charts and General

Under “Charts” and “General” you’ll be able to configure the CFD chart as well as the Backlogs (opt in/out of backlogs), Working Days and how your bugs appear (Backlogs or Task boards or neither). These settings used to be scattered around the UI, so it’s great to have a single place to set all of these options.

Tasks as Checklists

If you use Tasks as checklists, then this is a great new feature. Each Story (or Requirement or PBI, depending on your template) card now shows how many child Tasks it has. Clicking on the indicator opens up the checklist view:

You can drag/drop to reorder, check to mark complete and add new items.

Task Board Enhancements

The Task board also gets some love – conditional styling (just like the Kanban cards) as well as the ability to add a Task to a Story inline.

More Activities Per Team Member

You can now set multiple activities per team member. I’ve always thought that this feature has been pretty limited without this ability:

Now you have a real reason to use the Activity field on the Task! The Task burn down now also shows actual capacity in addition to the ideal trend:

As a bonus, you can now also add new Team members directly from the Capacity page – without having to open up the Team administration page.

Team Dashboards

The “old” Home page (or Team Landing page) let you have a spot to pin charts or queries or build tiles. However, you couldn’t really customize how widgets were positioned, and if you had a lot of favorites, the page got a little cluttered. Enter Dashboards. You can now create a number of Dashboards and customize exactly which widgets appear (and where). Here I’m creating a new “Bugs” dashboard that will only show Bug data. Once you’ve created the Dashboard, just click the big green “+” icon on the lower right to add widgets:

Once you’ve added a couple of widgets, you can drag/drop them around to customize where they appear. Some widgets require configuration – like this “Query Tile” widget, where I am selecting which query to show as well as title and background color:

Here I’m customizing the Query widget:

You can see how the widgets actually preview the changes as you set them.

To add charts to a Dashboard, you need to go to the Work|Queries pane, then select the chart and add it to the Dashboard from the fly-out menu:

Similarly, to add a Build widget to the Dashboard you need to navigate to builds and add it to the Dashboard of your choice from the list of Builds on the left.

Now I have a really cool looking Bugs Dashboard!

Test Result Retention Policies

There is a tool for cleaning up test results (the Test Attachment Cleaner in the TFS Power Tools) – but most users only use it when space starts running low. Now you can set retention policies that allow TFS to clean up old runs, results, and attachments automatically. Open up the administration page and navigate to the Test tab:

Team Queries: Project-Scoping Work Item Types and States

If you have multiple Team Projects, and at least one of them uses a different template, then you’ll know that it can be a real pain when querying, since you get all the work item types and all the states – even if you don’t need them. For example, I’ve got a Scrum project and an Agile project. In RTM, when I created a query in the Agile project, the Work Item types drop-down lists Product Backlog items too (even though they’ll never be in my Agile Team Project). Now, by default, only Work Item Type (and States) that appear in your Team Project show in the drop-down lists. If you want to see other work item types, then you’re doing a “cross-Project” query and there’s an option to turn that on (“Query across projects”) to the top-right of the query editor:

Policies for Work Item Branch

Now, in addition to Build and Code Review policies for Pull Requests in Git branch policies, you can also require that the commits are linked to work items:

You can also just link the PR to a work item to fulfill the policy.

Labeling and Client-side workspace mapping in Builds

The build agent gets an update, and there are some refreshed Tasks (including SonarQube begin and end tasks). More importantly, you can now label your sources on (all or just successful) builds:

Also, if you’re building from a TFVC repo, you can now customize the workspace mapping:

And stay tuned, because some other features that are in VSO now will come in Update 2, like:

Build widgets in the catalog

As Karen wrote about in the dashboards futures blog, one area we’re focusing on is improving the discoverability and ease in bringing different charts to your dashboard. With this update, you’ll see a new option to add a build history chart from the dashboard catalog, and you’ll be able to configure the build definition displayed directly from the dashboard.

Markdown widget with file from repository

The first version of the markdown widget allowed custom markdown stored inside the widget. You can now choose to display any markdown file in your existing repository.

Or add the file to any dashboard in your team project directly from the Code Explorer.

Check out all the January features in detail for Visual Studio Team Services:

Microsoft announced Azure Stack at its Ignite event last year, for running something like Azure on-premises, but how does it differ from the existing Azure Pack, which kind of does the same thing?

This answer goes to the heart of how Microsoft is changing to become a cloud-first company, at least within its own special meaning of “cloud”. Ignite attendees heard about new versions of Windows Server, SharePoint, Exchange and SQL Server, and the common thread running through all these announcements is that features first deployed in Office 365 or Azure are now coming to the on-premises editions.

Why Azure Pack and Azure Stack?

We are all living in a cloud computing world now, and IT people talk about the "cloud" more and more often. Microsoft Azure is at the top of the list of providers of proven, stable cloud services. It includes IaaS (Infrastructure as a Service), PaaS (Platform as a Service), SaaS (Software as a Service), and many more cloud-related services. As we all know, Azure has been very successful with availability, security, performance, and so on. But most enterprises and businesses have already invested a lot in building their own infrastructure, and this is even more true for managed service providers. So instead of moving every service to the cloud, people have become more interested in a hybrid-cloud model, where some services use the public cloud while others keep running from the datacenter.

To address the hybrid-cloud model, Microsoft decided to bring Azure technologies to the public so companies could use the same technologies used in Azure in their own datacenters. The result was the "Azure Pack".

According to Microsoft,

Windows Azure Pack provides a multi-tenant, self-service cloud that works on top of your existing software and hardware investments. Building on the familiar foundation of Windows Server and System Center, Windows Azure Pack offers a flexible and familiar solution that your business can take advantage of to deliver self-service provisioning and management of infrastructure — Infrastructure as a service (Iaas), and application services — Platform as a Service (PaaS), such as Web Sites and Virtual Machines.

This was a big relief for MSPs, as they could offer a portal to their customers to manage their resources efficiently.

Azure Pack mainly depends on infrastructure running on Windows Server and System Center. It uses System Center Virtual Machine Manager to manage virtual machines, and System Center Service Provider Foundation to integrate all the related operations between portals and services. The following are some great features of Azure Pack.

Well, Azure Pack was the first big step along the path, but the technology keeps changing every day. With the new version of Windows Server, software-defined storage and software-defined networking will bring revolutionary change. The solution to face this new requirement is Azure Stack. Microsoft keeps sharpening up the Azure platform, and with Azure Stack it will bring the same proven cloud capabilities to the hybrid cloud.

Azure Pack was "an effort to replicate the cloud experience," Microsoft's Ryan O'Hara (senior director, product management) told the press at Ignite. By contrast, Azure Stack is "a re-implementation of not only the experience but the underlying services, the management model as well as the datacenter infrastructure."

In other words, there is more Azure and less System Center in Stack versus Pack, and that is a good indication of Microsoft’s direction. That said, Microsoft’s Azure Stack slide says “powered by Windows Server, System Center and Azure technologies,” so we should expect bits of System Center to remain.

According to Mike Neil, General Manager for Windows Server at Microsoft:

Microsoft Azure Stack extends the agile Azure model of application development and deployment to your datacenter. Azure Stack delivers IaaS and PaaS services into your datacenter so you can easily blend your enterprise applications such as SQL Server, SharePoint, and Exchange with modern distributed applications and services while maintaining centralized oversight. Using Azure Resource Manager (just released in preview last week), you get consistent application deployments every time, whether provisioned to Azure in the public cloud or Azure Stack in a datacenter environment. This approach is unique in the industry and gives your developers the flexibility to create applications once and then decide where to deploy them later – all with role-based access control to meet your compliance needs.

Built on the same core technology as Azure, Azure Stack packages Microsoft’s investments in automated and software-defined infrastructure from our public cloud datacenters and delivers them to you for a more flexible and secure datacenter environment. For example, Azure Stack includes a scalable and flexible software-defined Network Controller and Storage Spaces Direct with automated sync and failover. Shielded VMs and Guarded Hosts bring “zero-trust” software-defined security to your private cloud so you can securely segment organizations and workloads and centrally control and monitor access and administration rights. Furthermore, Azure Stack will simplify the complex process of deploying private/hosted clouds based on our experience building the Microsoft Cloud Platform System, a converged infrastructure solution.

Azure Pack "depended" on System Center services. Azure Stack, on the other hand, will not "depend" on System Center, but it is possible to integrate it with Operations Management Suite and System Center.

Despite this disparity, Microsoft’s general approach seems to be to evolve and optimize server products for Azure and Office 365, and then to trickle down features to the on-premises editions where possible. It therefore pays for developers and admins working on Microsoft’s platform to keep an eye on the cloud platforms, since this is what you will get in a year or two even if you have no intention of becoming a cloud customer.

This approach does make sense, in that characteristics desirable in a cloud product, such as resilience and scalability, are also desirable on premises. It may give you pause for thought though if the pieces you depend on have no relevance in Microsoft’s cloud. We have already seen how the company killed Small Business Server, for which the last full version was in 2011.

That brings us to Azure Stack, the purpose of which is to bring pieces of Azure into your datacenter for your very own Microsoft cloud. The existing Azure Pack already does this, but it is essentially a wrapper for System Center components (especially SCVMM) that allows use of the Azure portal and some other features on premises.

There are many cloud platforms used to build custom hybrid-cloud scenarios, like Cloud Foundry, which is used quite often by different infrastructure providers such as Intel, EMC², VMware, and Swisscom. As of two days ago, there is another one from Microsoft: Azure Stack, which is great for #hybrid #cloud.

Microsoft has launched the public preview of Azure Stack, something that has been in TAP for several months now. You can find the download on MSDN right now or from the download link.

Azure Stack is a collection of software technologies that Microsoft uses for its Azure cloud computing infrastructure. It consists of "operating systems, frameworks, languages, tools and applications we are building in Azure" that are being extended to individual datacenters, Microsoft explained in its white paper. However, Azure Stack is specifically designed for enterprise and service provider environments.

For instance, Microsoft has to scale its Azure infrastructure as part of operations. That’s done at a minimum by adding 20 racks of servers at a time. Azure Stack, in contrast, uses management technologies “that are purpose built to supply Azure Service capacity and do it at enterprise scale,” Microsoft’s white paper explained.

Azure Stack has four main layers, starting with a Cloud Infrastructure layer at its base, which represents Microsoft’s physical datacenter capacity (see chart).

The Azure Stack software.

Next up the stack there’s an Extensible Service Framework layer. It has three sublayers. The Foundational Services sublayer consists of solutions needed to create things like virtual machines, virtual networks and storage disks. The Additional Services sublayer provides APIs for third-party software vendors to add their services. The Core Services sublayer includes services commonly needed to support both PaaS and IaaS services.

The stack also contains a Unified Application Model layer, which Microsoft describes as a fulfillment service for consumers of cloud services. Interactions with this layer are carried out via Azure Resource Manager, which is a creation tool for organizations using cloud resources. Azure Resource Manager also coordinates requests for Azure services.

A few months ago, it was announced that Prism was handed over to new owners. The Patterns and Practices team spent some time and effort identifying the needs and vision around Unity, and owners who would invest in the project and support the community.

Going Forward

Be sure to read the official announcement from the new team and follow their work on the new GitHub repo. Let them know what you'd like to see in future releases of Unity and help them continue to grow the community.