Building on the exceptional success of last year’s edition, Global Azure Bootcamp 2015 (#GlobalAzure) is a free one-day training event, taking place on the 25th of April 2015 in several venues worldwide, driven by local Microsoft Azure community enthusiasts and experts. It consists of a day of sessions and labs based on the Microsoft Azure Readiness Kit or custom content. The event was originally designed by five Microsoft Azure MVPs to benefit local community members and teach essential Microsoft Azure skills and know-how. While supported by several sponsors, including Microsoft, the event is completely independent and community-driven.

Global Azure Bootcamp 2014 took place in March 2014, and ran at 136 locations in 54 countries on the same day, including countries like Nepal and Mauritius – possibly the largest community event ever. Approx. 480 organizers welcomed about 5,600 attendees. The event also featured a charity lab where attendees deployed virtual machines into Azure to help analyse data for diabetes research.

Just a few days ago, the team in Redmond announced the general availability of Azure Search, along with a number of other new announcements.

For the past few months I had the opportunity to talk, blog and answer questions about Azure Search while it was still in public preview. Today, however, the service is no longer in preview, which means that Microsoft’s managed search-as-a-service solution is now fully baked, with an SLA and a stable, less-changing REST API schema and models. In short: full-text search in a box.

The purpose of Azure Search is to help software developers implement a search system within their applications (whether web, mobile or desktop) without the friction and complexity of writing SQL, JavaScript or other query code, and with all the benefits of an administration-free system.
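To give a flavor of how simple this is, a query against the service is just a REST call. Here’s a minimal sketch of how such a request could be put together; the service name, index name and API key below are placeholders of my choosing, and the api-version shown is the one current at GA:

```python
from urllib.parse import urlencode

def build_search_request(service, index, query, api_key, api_version="2015-02-28"):
    """Build the URL and headers for a simple full-text search GET request."""
    params = urlencode({"search": query, "api-version": api_version})
    url = f"https://{service}.search.windows.net/indexes/{index}/docs?{params}"
    headers = {"api-key": api_key, "Accept": "application/json"}
    return url, headers

# Placeholder service/index/key – substitute your own values
url, headers = build_search_request("myservice", "hotels", "beach", "PLACEHOLDER-KEY")
print(url)
```

Issuing the GET with any HTTP client then returns the matching documents as JSON; no query language or server administration involved.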

Not only did the team make the service generally available, they also added some great new features to this release: an indexer mechanism, which allows Azure Search to literally crawl for data in any modern data repository such as Azure DocumentDB, Azure SQL Database or SQL Server running on Azure VMs, and the concept of suggesters (previously in preview in the 2014-10-20-Preview API version – I wrote about suggesters in the Azure Search Client Library update announcement here), which lets users specify a suggestion algorithm when running the suggest operation available in Azure Search.

For the past month I had the opportunity to talk about Azure Search via the Azure DevCamp roadshow put together by the ITCamp community, with the sponsorship of Microsoft. Not only did they put together a great event series, but I also had the chance to meet wonderful people interested in cloud computing across the entire country: Bucharest on February 13th, Oradea on February 20th, Timisoara on the 21st and Cluj-Napoca on the 28th.

Below are my slides (in English) and further down this post is the video recording of my presentation in Cluj-Napoca. For some strange reason – my Surface’s OS went to sleep just before my presentation and then couldn’t find a particular .dll belonging to Newtonsoft’s JSON.NET – one of my demos didn’t run as expected in Cluj-Napoca, even though everything went smoothly during the other three events. I’ve also posted a few photos from some of these events in a photo gallery at the end of this post.

I’m happy to announce that ASCL (Azure Search Client Library) has received a new update, namely 0.8.5522.36498. Using the newer version you can now enjoy suggestion algorithms without worrying about the little bugs :), use suggestions via the freshly announced ‘Suggester’ functionality, use tag boosting and take complete advantage of the multi-lingual support of Azure Search.

Along with this update I’ve also written two new ‘Getting started’ projects which help you better understand how to use ASCL.

Earlier this morning I received a very nice e-mail from the DevSum 2015 committee stating that I’ve been accepted as a speaker for this year’s conference. Therefore, if you happen to be in Stockholm between May 25th and 27th, I’d love to see you at the Clarion Hotel Sign at DevSum 2015.

DevSum is a yearly conference, now at its 10th edition, taking place at the end of May in Stockholm, which also happens to be (according to www.devsum.se) ‘the largest and most enjoyable .NET conference’ in Sweden :-). During the two-day conference, attendees get a chance to learn about .NET, Azure, Architecture, XAML, Puppet, JavaScript and more from a panel of 40 Swedish and international speakers (actually, Scott Hanselman himself spoke at one of the previous editions). My presentation is going to be about ‘Patterns For Scalability In Microsoft Azure Applications’; even though I’ve covered this subject a dozen times over the past 12 months, I’ll put together a revamped presentation with some extra ‘knowledge’ I’ve learned from my experience with cloud computing.

Along with fellow community leaders and speakers and with the support of Microsoft Romania, I’m putting together the first community organized Azure-centric event in Oradea for 2015!

Come and join us at Azure DevCamp Oradea

Part of a series of seven events taking place across the entire country (Bucharest, Oradea, Timisoara, Targu-Mures, Cluj-Napoca, Sibiu, Brasov), Azure DevCamp Oradea is your chance to learn more about the freshly announced services in Azure:

Azure DevCamp Oradea will take place on the 20th of February, at Hotel Continental Forum (1 Aleea Standului), will start at exactly 16:00 and is completely FREE of charge. However, registration is required prior to the event and can only be done at http://aka.ms/oradea-20-februarie.

Here’s your chance to learn more about Web Development with Microsoft Azure, during an intensive 2-day course held in Cluj-Napoca. The course is organized by Avaelgo and is currently set to take place at the end of March. Even though it’s still in draft form, the agenda is vast and covers most IaaS, PaaS and SaaS services available in Azure, and hence I totally recommend it.

Learn more about Avaelgo and the Web Development with Microsoft Azure course here.

Have you ever used public WiFi in a coffee shop? Or did you use it in an airport, hotel, restaurant or a museum? I bet you were wondering how safe these networks are and whether your HTTP traffic can be sniffed by anyone nearby! Well, to keep the answer short, public WiFi networks are anything but safe, and traffic can be sniffed with ease on almost any of them.

On the other hand, did you ever try to watch movies on Netflix, listen to some good music on Spotify or Internet radio on Pandora from an Eastern-European country (e.g. Romania), just to find out that these services don’t work in Romania (yet)?

Well, Azure is here to the rescue! In this article you’ll go through all the steps necessary to create a VM hosted in one of Azure’s data centers so that all your Internet traffic goes through a secure VPN tunnel to the data center. In the end, this basically means that your traffic will look as if it originates from within Azure, and thus you’ll be able to use the kind of services mentioned earlier.

The infrastructure schema of what we’re trying to achieve looks something like this (please try to bear with me here – I’m totally aware my drawing skills are close to nonexistent):

Prerequisites

There are a few requirements in order to successfully complete this step-by-step guide:

you will most certainly need an Azure subscription. You can use either a 30-day free trial account or a Pay-As-You-Go account. Additionally, if you have an MSDN subscription, you can use your Azure credits, which are part of your benefits as an MSDN subscriber. Here’s a link on how to sign up for a 30-day trial account today using your Microsoft Account

an SSL certificate. Again, there are a few options here: the obvious one is to buy an SSL certificate from a publicly trusted Certificate Authority (CA), or to create a self-signed certificate which you’ll manually install in the Trusted Root Certification Authorities container. In order to create self-signed certificates, you can use either makecert.exe, a utility which comes with any installation of Visual Studio 2013 (or 2012, for that matter) and/or the Windows SDK, or selfssl.exe, part of the lightweight IIS6 Resource Kit.
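If you go the self-signed route with makecert.exe, the invocations could look roughly like this (run from a Visual Studio developer command prompt; the certificate names and output file are placeholders of my choosing):

```
:: Self-signed root certificate, stored in the current user's Personal store
:: and exported to a .cer file
makecert -sky exchange -r -n "CN=AzureVpnRootCert" -pe -a sha1 -len 2048 -ss My "AzureVpnRootCert.cer"

:: Client certificate signed by the root above, valid for 96 months
makecert -n "CN=AzureVpnClientCert" -pe -sky exchange -m 96 -ss My -in "AzureVpnRootCert" -is My -a sha1
```

The root certificate (the .cer file) is what you’ll upload to Azure, while the client certificate stays on the machine that connects to the VPN.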

I have to start off with two things I want you to bear in mind while you read this post:

this is my absolute first production deployment of Windows 8.1 using MDT 2013 (OK, over these last 4 days I did hundreds of back-and-forth runs using Microsoft Deployment Toolkit (MDT) along with WDS in order to find the most manageable deployment architecture, but still…)

any comments are very welcome!

In order to take advantage of an easily maintainable and upgradeable, yet controllable IT infrastructure within the company, I’ve decided to deploy a few VMs running Windows Server 2012 R2 with the WDS role installed. I’ve also installed MDT 2013 (you can download it from here) and the Windows Assessment and Deployment Kit for Windows 8.1 Update (ADK – you can download it from here). The ADK is required in order to get MDT 2013 to work. Also, make sure that you don’t have any older versions of the ADK installed (such as the ADK for Windows 8.0, which usually comes high up in the search results when you look for ‘ADK Windows 8.1’).

Installing both the ADK (which should come first) and MDT 2013 is child’s play, but only if you remember to sign out after you install the ADK – this will force the PATH environment variable to get updated with the %ProgramFiles%\Windows ADK values. Trust me, this is a requirement for a smooth runtime experience with MDT 2013.

As a newcomer, one of the best approaches to learning MDT 2013 is to download the MDT Documentation archive from here, but bear in mind that there are a few best practices missing from the documentation kit which will be extremely helpful in the long run:

When you create your first deployment share, bear in mind to use a single-word share name (UNC path), different from the default ‘DeploymentShare$’. The same goes for the deployment share name and folder name. The reason is that you will eventually boot using a customized version of Windows PE (Pre-installation Environment) which might show you the list of task sequences you have defined within your deployment. If you’re like me and like to test things out, you probably don’t want your production images to be mixed with the staging ones. Therefore, I’ve created a deployment share called ‘MDT Staging’.

The deployment share is nothing else than the name suggests: a share – a network share, to be specific. This basically means that while deploying the customized images of your OS, either you or your users will have to get access to the share. There are two options for this: you can manually send the share credentials out to your users, hoping that they won’t share these credentials with others and that they’ll type them correctly – why shouldn’t they? The second option is to configure the credentials within an initialization file called bootstrap.ini (which is actually configurable from within the Deployment Workbench directly – simply right-click on the deployment share itself, choose Properties in the context menu, go to the ‘Rules’ tab and click the ‘Edit Bootstrap.ini’ button). Here you can simply set the following defaults: UserID, UserDomain and UserPassword. You might argue that this represents a security vulnerability because you’re saving a set of credentials with access to one of your shares in clear text. I admit that, but as long as this specific account only has read access to the share (and write access to the ‘Logs’ folder within the deployment share), there’s no real reason for concern. Additionally, this user doesn’t even have to be a directory account; it can be a simple local account with read-only access to the share. And since we’re at the bootstrap.ini, it’s also worth sharing that the SkipBDDWelcome=YES default will help a lot as well: specifically, it skips the welcome message in the deployment wizard.
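To make this concrete, a bootstrap.ini following the ideas above could look roughly like this (the server, share, account and password values are placeholders of my choosing):

```ini
[Settings]
Priority=Default

[Default]
DeployRoot=\\MDT01\MDTStaging
UserID=MDT_ReadOnly
UserDomain=MDT01
UserPassword=P@ssw0rd!
SkipBDDWelcome=YES
```

Remember that bootstrap.ini gets baked into the boot image, so after changing it you’ll need to update the deployment share (and regenerate the boot media) for the change to take effect.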

It might make more sense to go through the deployment as quickly and seamlessly as possible. Therefore, a few Skip defaults within the customsettings.ini (by the way, when you change anything within the ‘Rules’ tab in the main textbox, you’re actually updating the customsettings.ini, which is extremely convenient considering that you’d otherwise have to manually open and save a text file in an elevated Notepad) might help:

SkipAdminPassword=YES (if you also configure the AdminPassword default, this will force the Administrator password page to be skipped) – whether you’re creating a reference image or a target image, you’d probably be better off with a unique administrator password, referenced within the Workbench rather than on a bulky handwritten note somewhere in your office drawer

SkipProductKey=YES – whether you’re creating a reference image or a target image, the product key will probably be a MAK which you can safely put in the task sequence (you don’t want your curious users to write this MAK down and use it back at home, right?), or you might even use a KMS to activate your OS. If you don’t have a key at all, don’t bother going through this deployment wizard page anyway: the installer will ask for it and you can just skip this step until you activate the OS

SkipDomainMembership=YES – it’s best to have the domain configured directly within the customsettings.ini file using the JoinDomain, DomainAdmin and DomainAdminPassword values. Keep in mind that Admin in DomainAdmin doesn’t mean that you need to put in your admin user’s password: instead, simply create a user within your Active Directory which is only allowed to Create Computer objects and Delete Computer objects, along with the option of configuring properties (read/write properties) on all your computers within the OU. This basically means that this will be a special user only allowed to join computers in the domain which helps a lot in automating the deployment process

SkipLocaleSelection=YES

SkipTimeZone=YES – instead, simply configure the time zone using the TimeZoneName default (e.g. ‘E. Europe Standard Time’). Remember that within Windows, you can get your current timezone and the names of the rest of the time zones using the tzutil command. After all, you’ll most likely deploy the computers based on a deployment share only within a single time zone.

SkipApplications=YES – make this part of your task sequence instead; I’ll have more on this later on

SkipRoles=YES – same as before, make this part of your task sequence instead

SkipBitLocker=YES

SkipBDDWelcome=YES

If you’re configuring a target deployment (which, as mentioned earlier, should be a different deployment share for the best deployment experience), make sure that you’re also configuring:

SkipCapture=YES – after all, you can configure the DoCapture default to whatever you’d like your task sequence to end with and, again, having a simple wizard will be much easier to manage in the long run

You might test out different default values and different task sequence options before you actually deploy to your hardware devices, so having some of these defaults configured to NO or not at all (such as the domain defaults – you probably don’t want to add all your tests to your directory) might make sense. However, rather than deleting them from your file, you can comment them out using the ‘;’ symbol. This is also super helpful when you create a new deployment share, because you can simply comment out or un-comment settings based on your deployment share’s target.
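Pulling the defaults above together, the ‘Rules’ (customsettings.ini) for a target deployment share could look roughly like this – the domain, account, password and time zone values are placeholders of my choosing:

```ini
[Settings]
Priority=Default

[Default]
SkipBDDWelcome=YES
SkipAdminPassword=YES
AdminPassword=P@ssw0rd!
SkipProductKey=YES
SkipDomainMembership=YES
JoinDomain=yourdomain.local
DomainAdmin=MDT_JoinAccount
DomainAdminPassword=P@ssw0rd!
SkipLocaleSelection=YES
SkipTimeZone=YES
TimeZoneName=E. Europe Standard Time
SkipApplications=YES
SkipRoles=YES
SkipBitLocker=YES
SkipCapture=YES
; commented out while testing:
;DoCapture=YES
```

Swapping the commented lines in and out is exactly the trick described above for switching between test and production behavior without losing your settings.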

When it comes to the actual deployment shares, there are a few things worth sharing:

First and foremost, make sure that you always test your deployments using a VM (Hyper-V is probably one of the best virtualization technologies you can use for free right now for this purpose, especially because Gen2 VMs can both PXE boot and are UEFI capable). This is a best practice because you can always create a checkpoint and revert the machine back and forth just to make sure that your deployment works fine. It doesn’t make sense to wait a long time for your reference deployment to be created just to find out that a variable or some application is messing up the entire process. Additionally, using a VM ensures that only the most generic hardware drivers will be used and no funny mouse-or-whatever-device drivers get injected, as would happen if you used an old PC to test your deployments (actually, you shouldn’t use an old PC to deploy anything; you’d better get rid of it :-)).

And since we’re talking about drivers, whatever you do, never ever add drivers to your reference image. Instead, add them to your target image only, because you might eventually need to buy a new PC which might have different specs than the original one: do you really want to create the entire reference image from scratch and install all the apps used within the company again?

If you’re using PCs from known vendors (HP, Dell, Fujitsu, Lenovo etc.), make sure that you get the corresponding drivers from their enterprise support systems. In fact, there are some apps for that too, such as HP SoftPaq and ThinkVantage Update Retriever, but if you’re not able to use any of these, simply go through their enterprise support websites (here’s the one for Dell).

Never ever download drivers from strange websites or aggregates (Softpedia and such). If the vendor has a website, use that website instead!

As a best practice, I’d also advise you to group all the drivers in an OS\Computer model hierarchy. Also, make sure that the model is exactly the same as the model specified by the vendor. You can get the model specified by your vendor by using the Get-WmiObject PowerShell cmdlet (Get-WmiObject -Class Win32_ComputerSystem).
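For example, from a PowerShell prompt on the machine in question, something along these lines returns the vendor-reported values (on custom-built PCs the output may be generic):

```powershell
# Manufacturer and model as reported by WMI – use the Model value
# verbatim as the folder name in your OS\Computer model hierarchy
Get-WmiObject -Class Win32_ComputerSystem | Select-Object Manufacturer, Model
```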

Another best practice is to create task sequences based on the PC models you have in the company, considering these are brand PCs from known vendors rather than custom-made PCs. The cool trick here is in regard to drivers: you can control the drivers which exist in the driver repository Windows looks into when it first installs by changing the following:

In the Preinstall step within a task sequence, go to Inject Drivers, change the default selection profile to ‘Nothing’, and also check the radio button option ‘Install all drivers from the selection profile’. This might not make any sense at first, because we’re apparently telling the deployment process to get all its drivers from nowhere (?!), but the fact is that

you configure (before the Inject Drivers phase) a Task Sequence Variable (from Add > General) and name it DriverGroup001 and give it the value of Windows 8.1\%model% (considering that you’re using an OS\Computer model hierarchy as advised earlier).

this will basically instruct Windows to look only in a computer model’s specific folder for drivers, not in the entire repository of all the drivers for all the PCs you’re using in your company

unfortunately, if you’re using a custom-made PC you’ll get generic computer model names instead, such as ‘All Series’ if you have an Asus motherboard.
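To my knowledge, the same DriverGroup001 filtering can alternatively be expressed as defaults in customsettings.ini instead of a per-task-sequence variable; a sketch:

```ini
[Default]
DriverGroup001=Windows 8.1\%Model%
DriverSelectionProfile=Nothing
```

Either way, the effect is the same: the Inject Drivers step only considers the folder matching the machine’s reported model.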

Earlier in this post I mentioned that it’s fine to skip the applications selection page. The idea is actually to get better control of the applications you’re installing and also more insight into the applications which have quiet installers. Basically, rather than having the deployment process install the applications on your behalf as one bulky operation, you should create a new group right before the Windows Update (Pre-Application Installation) phase called ‘Custom Tasks (Pre-Windows Update)’ and have all your applications installed as Install Single Application phases. If you don’t like/need/want that kind of control, you could also create an application entry in the application group within the deployment share which depends on all the applications you want to install, and have this application added as an Install Single Application phase in your new group. Of course, you might be wondering why you’d do that: the reason is that if you’re installing Microsoft applications (which you probably will), you should get updates for these applications too. You might also be installing chipset drivers, and this application-driver type should be installed first.

Anyway, the idea of having applications installed as Install Single Application phases is to gain better control of the application installation process and, ultimately, to automate the entire deployment process altogether.

Another cool trick available in MDT (and not available in SCCM, at least not to my knowledge) is that you can temporarily suspend the deployment process for cases in which, let’s say, you need to manually download an installer or a ClickOnce application or whatever. All you have to do is copy the Tattoo phase in the task sequence, paste it wherever you need the deployment process suspended and replace ZTITatoo with LTISuspend in the command line. This will automatically suspend the deployment process, allow you to run whatever tasks manually and, when you’re done (even if you need to restart), just double-click the Resume shortcut which was created on the desktop (this automatically resumes the deployment process from where it left off). This trick helps install ClickOnce applications which require licensing (they normally exit with a 0 or 3010 code too soon and thus don’t get installed properly) or install apps and SDKs using the Web Platform Installer (such as the Azure SDK).
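For reference, after the rename the command line of the duplicated step ends up looking roughly like this (%SCRIPTROOT% is resolved by MDT at run time to the Scripts folder of the deployment share):

```
cscript.exe "%SCRIPTROOT%\LTISuspend.wsf"
```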

Last but not least, make sure that you select the Windows Update options in the task sequence of your deployment process for the target computers only. Downloading updates during the deployment process on the reference computers will make the process take considerably longer (in my tests, it took an extra 3 hours to create the reference image when the computer was updated during reference image deployment) and thus doesn’t make much sense. Instead, you might be interested in updating the target computers only. Moreover, you could also add the update packages (though it is tremendous work to keep the Packages folder in the deployment share up to date) or you could install the Windows Server Update Services (WSUS) role on one of your servers and set the update server URL within the customsettings.ini file using the WSUSServer default.
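If you go the WSUS route, the customsettings.ini entry could look like this (the server URL and port are placeholders; use whatever your WSUS installation listens on):

```ini
[Default]
WSUSServer=http://wsus.yourdomain.local:8530
```

With this set, the Windows Update steps in the task sequence pull updates from your internal WSUS server instead of Microsoft Update.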

Junior software developers help design and maintain software applications. They might also speak with customers to gather user requests for different features to improve speed, performance and usability. Junior software developers also conduct system tests, troubleshoot customer issues and correct software defects. Other duties include creating customer software manuals and project documentation, and developing prototypes for new software technologies.

Technical Tasks

Design, develop and modify modules based on functional and system requirements.

Work closely with the Team Leader, Business Analyst and Product Owner to understand the functional and system requirements.

Work closely with the Architecture Team to ensure architectural integrity and product quality.

Participate in the testing process through unit testing and bug fixes.

Other Tasks

Participate in daily scrum meetings

Participate in sprint planning

Work closely with the QA team, Product Management team, and the Research and Development manager to ensure quality and punctual software development within his/her responsibilities.