Trouble moving from Windows Live Custom Domains to Office 365

If you are a user of Windows Live Custom Domains, you may have heard that Microsoft is retiring this service and users are encouraged to sign up to Office 365 as an alternative. If you’re thinking about switching your services, then I recommend you hold off and read the points below first! I tried to switch and I got burned by the process. Hopefully the comments below are of help to you, Dear Reader.

Signing up to the service

As it turns out, the initial sign-up process is the easiest thing to do. You can choose a plan that suits, such as the ones mentioned in the Office 365 link above. You also have the option of a 30-day trial. The next step after signing up is to start adding your domains to the service through the Office 365 portal login page. If, like me, you want to do this for services that were previously hosted using Windows Live Custom Domains, then BEWARE AND READ ON!

Adding websites to Office 365

To add websites to Office 365, the first thing you need to do is verify that you own the domain name you wish to add. To do this, the initial step is to choose how you will manage your DNS records and then update them to point your website’s *TXT/MX* records to Office 365. Once the TXT/MX records are added, the next step is to verify domain ownership through the Office 365 portal. When I did this, I kept getting the error below.

Sorry, we couldn’t verify the domain name

The domain is already being used with another Microsoft hosted service, such as Office Live or Live@EDU, and a domain can be associated with only one service. Try one of these things:

– Remove the domain from the other service, and then try again to verify the domain.

– If you didn’t add it to another service, ask a question in the Office 365 community. Include this error message so others can help with the issue.

To cancel your service in Windows Live Admin Center, you need to perform the following steps:

Sign in to Windows Live Admin Center by using your administrator Windows Live ID.

Under "Your domains", click the appropriate domain.

In the navigation pane, click Domain Settings.

Click Cancel Service, and then click Yes.

WARNING: trouble, trouble, trouble!!

I’m not really sure why you have to cancel the Windows Live Custom Domain service before being able to verify your domain on Office 365. However, this process broke everything for me. Be careful before you take this step. I didn’t hesitate in following these cancellation steps because I read the following text on the Move your custom domain to Office 365 page:

Move your custom domain account holder data.

Custom domain account holders can continue to use their email addresses to access other services that require a Microsoft account, such as Windows 8.1, Xbox, OneDrive, and Skype.

After you have set up your custom domain in Office 365, email sent to your account holders will arrive in Office 365 instead of in Outlook.com. Existing email already delivered to Outlook.com will remain there.

Your custom domain account holders have the option to move their email history, contacts, and calendars from their Outlook.com inboxes to their new Office 365 accounts. Each individual account holder must move their own data. As domain administrator, you’ll need to instruct your account holders to move their email, if they want.

So, reading the above, I thought everything would be OK and I cancelled my Windows Live Custom Domain services without thinking too much about the process. It didn’t register with me at the time that this cancellation would effectively delete my custom domain emails. Fortunately, I do have a recent local backup of my email data, so I didn’t lose that many emails! Not only does cancelling your Windows Live Custom Domains services result in your emails being lost; it also forces you to rename (effectively recreate) your Custom Domain Live ID to another Outlook Live ID alias. After you log on you will be greeted with NO EMAILS in your inbox. The process of restoring lost emails after an account rename didn’t make any difference either.

To make matters even worse, after further digging, I came across this thread from others who have had similar issues. According to a response from Microsoft in there, it could take up to 90 days for the Windows Live Custom Domains data to fully delete and the association to be removed so that the domain can be verified in Office 365!!! An insanely long amount of time.

Summary

In summary, thanks to misleading statements on Microsoft’s websites, I thought I could easily move from Windows Live Custom Domains to Office 365. To me, the instructions imply that you should be able to switch TXT/MX records from pointing to your Live Custom Domain details to Office 365 and then switch back if things go wrong. Being able to successfully do this would also mean that your email data wouldn’t get lost in the process. However, because Office 365 will only verify your domain name if no references to it exist elsewhere in Microsoft’s databases, this process is basically impossible and won’t work.

Thanks to everything mentioned above, the only option I have for the time being while waiting for the 90 days to pass – or for some divine intervention! – is to find another host for my primary email. I’ve had a look around and it looks like Zoho Email is a good option so I’m using that now.

I really hope that Microsoft removes this dependency for verifying domain names in Office 365 so that one doesn’t get forced to delete their previous service until everything is up and running. This would also make the behaviour consistent with what’s implied in the instructions! I’ve asked around for some help and will update this post if I receive info that helps resolve this.

So there it goes, YOU HAVE BEEN WARNED, the rest is up to you!

Introducing developerbloggers.com

Sometime back, I wrote about Getting Involved in Open Source. Since then, I’ve been wanting to take Scott Hanselman’s advice on board, but to date I have not found a particular project that I felt comfortable participating in or one that fit all my interests. If you look at how much is available out there, it is a bit overwhelming! As a result, while still trying to decide which open source project I wish to participate in, I figured maybe I should start working on something of my own.

Welcome to *devbloggers.com*

As a software developer, one of the things I like to do on a regular basis is read blogs. However, with so many bloggers and different resources out there, it can be hard to find blogs to follow, especially less well-known ones or ones that are not in our immediate circle of connections, such as local user group members, tech employees …etc.

This is where developerbloggers.com comes in. The basic idea of the site is to help you find blog resources and to help bloggers be found also. Reading this, you will probably say the idea isn’t very new. That’s true in some way. However, the goal of developerbloggers.com is not to be a content curator. You can find out more about developerbloggers.com on the About page.

It’s still early days for the site, but rather than wait for it to be perfect – is there such a thing? – I decided to put the idea out there now. I hope that you all find this site useful and I look forward to it becoming a useful resource for you and me in our daily blog reading habits. Here are some of the ideas as mentioned on the About page.

The intention of this site is to provide a way for bloggers to be found and have their content read and discovered easily. Rather than being a content curator – and inspired by Scott Hanselman’s post When is it stealing? – we hope we can drive users to you!

Below is a list of some of the ideas for the site we are looking to implement:

– Ability to extract an OPML/XML list of bloggers to add to RSS feed readers (see the sketch after this list)

– Addition of twitter profiles for bloggers that also tweet

– Form to allow users to add their own developer blogs to the list

– Add categories to the blogs to allow sorting/filtering by them

– Personalised user profiles/logins to allow users to mix and match and pick blogs that match their own liking and export these to an OPML file

– Adding the Microsoft/MVP logos next to blog profiles of Microsoft employees or MVP bloggers

– Improved web design/layout

– Open Sourcing the code of the website so you can also contribute to the project and help make the site better

– Dedicated device apps for getting the site’s content

– And more…
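On the first idea above – OPML export – here is a minimal C# sketch of what generating such a file could look like. This is not the site’s actual implementation; the blog titles and feed URLs below are placeholders.

```csharp
using System;
using System.Linq;
using System.Xml.Linq;

class OpmlExport
{
    static void Main()
    {
        // Placeholder entries – the real site would pull these from its own data store.
        var blogs = new[]
        {
            new { Title = "Example Dev Blog", FeedUrl = "https://example.com/rss", SiteUrl = "https://example.com" },
            new { Title = "Another Dev Blog", FeedUrl = "https://example.org/feed", SiteUrl = "https://example.org" }
        };

        // OPML 2.0: each blog becomes an <outline> element under <body>.
        var opml = new XDocument(
            new XElement("opml", new XAttribute("version", "2.0"),
                new XElement("head",
                    new XElement("title", "developerbloggers.com blog list")),
                new XElement("body",
                    blogs.Select(b => new XElement("outline",
                        new XAttribute("type", "rss"),
                        new XAttribute("text", b.Title),
                        new XAttribute("xmlUrl", b.FeedUrl),
                        new XAttribute("htmlUrl", b.SiteUrl))))));

        // Feed readers can import the saved file directly.
        opml.Save("developerbloggers.opml");
        Console.WriteLine(opml);
    }
}
```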

In the meantime, while other features are still WIP, if you know a blog that should be on this site which currently isn’t part of the OPML list, or if you have any ideas for making the site better, then please share in the comments or contact me via the Contact page.

Let me know what you think.

If you’re a software developer using the Microsoft development platforms, you are probably either currently at the Microsoft Build 2014 conference or, like me, just catching up on the conference content through the coverage and videos on Microsoft’s Channel 9. Last month Channel 9 turned 10, and while I was browsing the site for some content I stumbled across a set of interesting videos that cover the history of Microsoft from the year it was founded in 1975 all the way to 1999. Following all the stuff from the new Microsoft at the Build 2014 conference might be interesting, but how about a look back at how it all started? The videos on Channel 9 cover a lot of that; to make it easier to go through them, I’ve grouped them all below in this post. Enjoy!

Microsoft Build 2014 Keynote Summary

So earlier today, Australia time, the Microsoft Build 2014 conference kicked off with lots of new announcements. If you haven’t watched the 3-hour keynote and are interested to know what was covered, here’s a quick summary of the presentation.

Windows Phone 8.1

Cortana: The Windows Phone personal assistant revealed. It uses Bing at its core and differs from competitor products in that it is able to better understand users, as opposed to just being a voice command system. See The Verge’s The story of Cortana, Microsoft’s Siri killer.

Release Date: Windows Phone 8.1 will be released with new devices in April/May and will roll out to other devices over the following months.

IE11

IE11 will be updated with an Enterprise Mode feature, which will be similar to Compatibility View and will make IE11 behave as if it were IE8.

Windows 8.1 (updates)

Metro on Desktop, new context menus: Windows 8.1 is getting many updates, including the ability to run Metro apps from the desktop and the introduction of new contextual menus to make it easier to access some Windows features, as well as simpler access to the Windows Store. To see all the changes in detail check out Paul Thurrott’s Windows 8.1 Update 1 Review.

Software Development

Universal Windows Apps: With all the changes going on at Microsoft, the company is trying to help developers build applications that work on all form factors. During the keynote address they introduced Universal Windows Apps which aims to help developers achieve this.

Following all the announcements, information and demos by all the speakers at the Build 2014 keynote, Satya Nadella, the newly appointed Microsoft CEO, came on stage for a Q&A session with developers. It was a nice way to wrap up the keynote and gave a nice portrayal of a friendly Microsoft. Well done on a great keynote, and despite not being there physically, I look forward to following the rest of the conference.

It took Microsoft over six months to announce a replacement for Steve Ballmer after he announced his retirement back in August. So, after such a long search period, why go with an internal candidate? Before we dig into that, here’s what Bill Gates, Steve Ballmer and John Thompson had to say about Nadella.

Steve Ballmer

Satya is a proven leader. He’s got strong technical skills and great business insights. He has a remarkable ability to see what’s going on in the market, to sense opportunity, and to really understand how we come together at Microsoft to execute against those opportunities in a collaborative way.

Bill Gates

Satya’s got the right background to lead the company during this era. There’s a challenge in mobile computing. There’s an opportunity in the cloud.

John Thompson

He has the technical expertise, the product experience and the leadership attributes we were looking for.

These comments collectively portray Nadella as someone with a lot of characteristics that should be in a CEO for Microsoft. As a software as well as a devices & services company, Microsoft would benefit from someone with a *technical background*. Being a company that has lacked some edge in some of its core product offerings, Microsoft would benefit from a CEO who has *vision and can see opportunities*. For a company as large as Microsoft, with thousands of employees worldwide, having a CEO that is able to *bring people together to collaborate* and who also understands the company inside-out is vital. It all sounds so perfect, doesn’t it? I think it does!

The Competition

If you’ve been following the coverage of the CEO search for the past few months you would know that among the other leading candidates were Ford CEO Alan Mulally, Nokia CEO Stephen Elop and Microsoft BDE exec and former Skype CEO Tony Bates. The main reasons for each: Mulally for his successes at Ford and his influence on Ballmer’s leadership; Elop for his previous run at Microsoft, his work as CEO at Nokia and Microsoft’s acquisition of Nokia’s devices and services business; and Tony Bates for turning around Skype, his background, his familiarity with Microsoft culture and his non-traditional look.

Looking at the competition, I think the choice of Mulally would have been the riskiest one for Microsoft. Microsoft is a pretty complex company and having an outsider take the reins as CEO would bring along nothing but risk. As I understand, most of those who saw Mulally as the perfect candidate are the ones who wanted to see major change at Microsoft, to see big decisions …etc. Whilst Mulally could bring such change, I think a change as dramatic as might have been anticipated would just tear Microsoft apart. Such change would’ve made sense if the company was in bad shape, but you have to remember that Microsoft as a whole is not struggling. With Mulally aside, my view is that Tony Bates and Stephen Elop lack the diversity that Nadella has when it comes to familiarity with different parts of Microsoft. As CEO of Nokia, Elop’s strength would’ve been the Devices & Services area, and for Tony Bates it’s the mobile & communications area. So what about the rest of the company?

The Choice

Putting aside the competition, let’s look at Nadella and what he can bring to the table.

On top of Nadella’s experience across Microsoft in a variety of product teams, he has a technical background that is fitting for a CEO of a software (and devices & services) company. Whilst Ballmer was a tech-savvy CEO, he was more of a business/sales guy than a technical guy. Nadella’s technical background can help him bring new ideas to the table and know what works and what won’t.

Vision & Ability to See Opportunities

Looking at what Nadella has achieved in the Cloud & Enterprise division, you can see that he is capable of seeing opportunities, competing with rivals, and doing it well. I like this interview Nadella did back in 2006, which shows some of his thoughts about Microsoft and working for it back then – Satya Nadella running the Microsoft Dynamics Team. In addition, his first interview as CEO provides some insight into how he thinks. Further to the new things that Nadella can bring to the company, he is already well established in one area which is seen by some board members as a big opportunity for Microsoft – that of course being the cloud. When it comes to the *mobile challenge*, Nadella has in his team the people he can rely on.

Leadership & Product Experience

There is probably not much to say here other than that his achievements in his previous roles at Microsoft in several product groups show that he is a can-do person who is also able to work effectively with others. For a company as complex and as big as Microsoft, being familiar with the company and able to collaborate is likely to produce better outcomes than being Steve Jobs-like and tearing the company apart.

Age

Unlike Alan Mulally, Nadella is much younger and is not close to retirement, so this gives him the potential of being around at Microsoft well into the future, with the chance to see change through. Mulally would’ve been a short-term CEO at best, which is not something Microsoft needed.

Passion

To me Nadella always comes across as someone with a lot of passion and energy; whilst different from Ballmer, he’s still a very likable personality. Just watch some of his keynotes, interviews and conference speeches!

Enterprise Microsoft

Most of Nadella’s experience over his time at Microsoft has been in Microsoft’s enterprise divisions and not so much on the consumer side. Contrary to what many believe, I actually think that this experience is one area that made Nadella a strong candidate for CEO. Whilst many Microsoft observers see Microsoft as a struggling company on the consumer front, as an enterprise company they are doing very well. If your company is good at something, wouldn’t it make sense to make what you’re good at even better? I personally think it does. To most average people out there, Apple is viewed as the *consumer* company and Microsoft the *business* company. This is why a while back I wrote the post Breaking up Microsoft. I think one of Microsoft’s biggest challenges is changing the perception of who Microsoft actually is. Some of the changes they made in 2013 help address that, but there is more to be done. I think this is where Bill Gates’ new role as Technology Adviser might come in.

Putting it all together

Being a Microsoft veteran, a person with a technical background, a young and energetic guy, a leader and a visionary executive with a great track record, a person who can strengthen Microsoft’s core abilities and create new ones, someone who can keep Microsoft familiar yet new, and a CEO surrounded by a great leadership team – I think Nadella was the best choice for CEO of Microsoft and I’m happy he’s the one who was chosen. I can’t wait to see what Microsoft will bring along in this new chapter for the company.

To Satya Nadella, I say congratulations on your selection. To the rest of you, what do you think?

I personally believe that the amount of work required for certification is not worth it. I think one would be better off volunteering with open source projects that are well-known, working on documentation and unit testing. Having a good open source project under your belt is far more valuable in my mind.

So in an attempt to take Scott’s advice on board, my goal is to focus on that in 2014. In addition to being involved in open source there are also other ways of getting involved in the developer community which could also be useful. This is something I enjoyed doing to some extent when I was involved in the Christchurch .NET user group in New Zealand but I’ve done less of that since moving to Melbourne.

Want to get involved? Here are things you can do

If you’ve decided to get involved in the developer community/open source but are not sure where to start or what to do, don’t worry! The efforts of others have made it easier for you and I to get started. Below is a list of posts and resources that I’ve come across which may be of help. If you know of other resources please share them here so that we can help make it easier for those looking to find them in one place.

BLOG POSTS

The majority of the links below are from Scott’s blog. There may be others out there that are relevant to this topic, but as a regular reader of Scott’s blog I think his posts should have most of what you need covered.

“In this production, Scott Hanselman and Rob Conery offer suggestions and advice on how you can get out there, and get involved. Blogging, Twitter, Github, StackOverflow, User Groups and Conferences: all of this can make you a happier, more productive developer and inspire you to take your career to the next level.” (PluralSight)

“Scott Hanselman is put to the test: give a 15 minute talk on a new subject! How does he prepare? What are his secrets? Find out!” (PluralSight)

As you can see, the two courses above are now in the PluralSight library. If you don’t have a PluralSight subscription, I highly recommend that you try them out. I had a one-month trial subscription and after watching some content decided to buy one. If you like watching videos to get familiar with tech content then PluralSight is a must-have. At the very least you will get to watch the courses above. I did, and I think you can take away a lot from them.

I hope you find this post and the links grouped here useful in getting you started in Getting Involved. If you know of any additional resources or links I did not include here please share them in the comments below (web) and I’ll update the post accordingly.

I’ve previously blogged about the absence of the traditional start button/menu in the preview release of Windows 8. Of course, things have changed since then, and the U-turn in Windows 8.1 is not quite the same either. With the Windows 8 changes, and prior to the announcement that Microsoft was bringing back the start button, I also posted my thoughts on why Microsoft removed the Start button/menu. With that mentioned, I think the approach Microsoft took with bringing back the start button really does meet the halfway mark in reducing customer dissatisfaction without drifting too far from their future vision of Windows. Let’s look at these changes in more detail.

The New Start Button

Below is a screenshot of what the new start button looks like in Windows 8.1. As you can see, this has evolved slightly in look from the start button that was originally there in the Windows 8 preview. What’s interesting here is that clicking the button in Windows 8.1 will not do what the old perception of the start button suggests – show the start menu.

Instead, clicking the button will just take you back to the Start Screen. So you might be thinking, what’s the point of the button then? Well. Exactly! In this context, the presence of the button is just as useful as its absence. But this is where things get a little bit more interesting.

Accessing Key Functions

One of the frustrations I had during my initial usage of Windows 8 was how to access key functions like the run-as command, Control Panel and similar standard Windows functions. It was only by chance that I discovered that these functions can be accessed via a context menu by right-clicking in the bottom left corner of the screen. With Windows 8.1 bringing back the start button, I think these functions have become a little easier to find, as the presence of the button tells users that there’s something to be clicked, so users might intuitively – or not so much – try various ways of clicking that button. One other good thing here is that Microsoft tweaked this list of functions to make it more useful.

Switching Context between Metro (Modern UI) apps and Desktop

Another thing that was very annoying in Windows 8 was that moving between the Metro UI interface and the Desktop felt very unnatural. The main reason for this is that the Desktop and the Start screen looked so different from each other due to both screens having different backgrounds. Thankfully, Windows 8.1 provides the ability to change this. As you can see below, in my Windows 8.1 setup the background on the Start screen is the same as the one I have on the desktop.

This was not possible in Windows 8, but with the new customisation options in Windows 8.1 it became possible. The main benefit of this feature is that the transition between the Metro UI and the desktop is more blended and thus less obtrusive.

Customising the Start Experience

Another good thing in Windows 8.1 is that Microsoft has provided *some* flexibility for users in managing their start screen experience. In addition to using the same background as the desktop, you can now also *boot to desktop* mode, as well as customise a few other settings through *Navigation Properties*, as you can see below.

One setting of interest from the list shown above is the Show the Apps view… setting. This setting, combined with the go-to-the-desktop setting, is the closest you will get to having the old *Start Menu* back. If you choose to activate both of these settings then it is best to view the Start screen – see below – with the apps view as your new and enhanced Start Menu. Think of it as an easier way to navigate your apps than the old nested tree-structure menu which is hard to drill down into!

Summary

In summary, in Windows 8 the start menu is dead and is not coming back. The Start screen tiles or the start screen app list are your new start menu. In addition, the start button is there only for your psychological benefit and it’s really not important at all. Lastly, the Windows 8 Modern UI (a.k.a. Metro) is the new direction for Windows and whether you like it or not, you have to get used to it. The new changes Microsoft made to the Start button, and the ways it can be customised, are there simply to make it easier for you to transition to the Windows 8 Modern UI. They just reduce the shock to the system compared to what Windows 8 did at launch. Still not happy with what the Windows 8.1 compromise is offering? Don’t stress, there are still ways to bring back the old start menu!

Pondering the Future of an IT Career: Part 2

In Part 1 of this post I shared my assessment of how my career has progressed so far since I joined the IT workforce, and I mentioned the things that I like and the ones I don’t. In this post I will expand further on the comments I made under the *The Future* and *How To Get There* sections.

In those sections I noted that I like working with new technologies and that I really enjoy doing R&D to find the best ways of utilizing technology to solve business problems as well as being passionate about improving things and making them better. In addition, I referred to some of the limitations I’ve faced over my career that I think are preventing me from walking down my desired career path. To address this I posed the question of how this can change, where I pointed out the possibility of pursuing things like Microsoft Certifications.

The Actual Problem

Before I talk more about certifications I first need to establish what I actually view as the problem I’m trying to address. Following on from part 1, in simple terms, the problem is that even though I’ve achieved and learnt a lot over my career in the past few years, I’m not where I want to be.

What Is It That I Want

So given the above – as established from Part 1 of this post – what I need is change. In order to be able to make that change, I need to first define what it is I actually want, what it is that I wish to do. I’ve been thinking about this for a while and I think I have some ideas. Here are some of them, in no particular order:

1) Work with new Microsoft Technologies

I’ve been interested in Microsoft as a company for a long time, and since being involved in the Microsoft Student Partner program while studying at university, I’ve wanted to work with MS technologies. Throughout my career so far I’ve been exposed to various offerings from Microsoft, but as of late, a lot of what I work with is somewhat old. In the developer space, Microsoft – and also other companies/organisations – have been constantly pushing out and adopting new tech, and in recent times they have become much faster with new releases …etc. As a result, it becomes very difficult to stay up to date if one doesn’t have the opportunity to work with the new offerings day to day. This is the reason I think certifications might be a good way to close that gap, but can this be a catalyst for change?

When it comes to development in the Microsoft space there’s obviously a huge variety of offerings and one cannot know everything. Therefore, if I were to pick some areas I would like to be more involved in specifically, I would choose the following: *Windows Azure/Web Apps*, *ALM* and possibly also *Windows Store* apps. The good thing is, all these topics have certification exams that cover them, as can be seen in the latest Visual Studio developer certifications. The value of new technologies and latest trends is always debatable, but they exist for a reason and, in my view, it’s important to be relevant & current.

The question then is, would an investment in these certifications be enough to facilitate the change I’m after? The hard yards can be done but before that, one needs to ensure that they’re going in the right direction, especially when one is self-sponsored. If not, then what else can be done?

2) SCRUM

Scrum is increasingly being used as an agile approach to software project management and lifecycle. I’ve been exposed to SCRUM in a previous role and I think that approach makes sense in many ways. However, I’ve never been formally involved in SCRUM and would be interested in being exposed more to that. In saying this, from a knowledge/learning perspective, I believe that SCRUM certifications like Certified SCRUM Master/Developer would tie in well with the Microsoft Visual Studio ALM certificates available. So to me, these make sense bundled together.

3) Getting involved

In addition to the interests I’ve mentioned in points 1 & 2, I enjoy attending and being involved as much as I can in technology events. Where possible, I make an effort to attend local user group events, tech conferences …etc. I find these activities/events among the best ways to stay familiar with what’s out there. I would love to do more of that on a regular basis. That’s a reason I’m keen on change, as due to the way things are at the moment I’m not able to fulfil this interest fully. Work/life balance?

4) Learn, Share & Grow

Last but not least, I enjoy constantly learning; I’m always open to new ideas and to contributing/sharing what I learn and know. This is one of the reasons I maintain this blog – it’s my window to share things I’ve learnt about and to interact with people like you out there. I like to refer to it as my ‘Online Connection’. Recently I haven’t been able to blog as much as I would like, but that’s something I’m working on changing.

I also value learning from others’ experiences and from making mistakes, as well as learning on demand – when facing challenges/tasks that I don’t know how to resolve and am in pursuit of solutions. This is applicable in the workplace – solving business problems – and outside that too.

When it comes to sharing, I believe that this makes a lot of difference in many ways. From my experiences in the past, I have found that sharing is very valuable for achieving good outcomes. I like to think of it as this: *an idea in your mind is worth nothing when it stays there. If you let it out, it could be worth something.* I have learnt that it pays off to speak up! It has made me appreciate that I’m a good thinker who is capable of adding value. That’s why teams that collaborate well are able to produce good results. Thus, learning and sharing facilitates growth.

Now whether or not the things I mentioned above are sensible and achievable, I guess it depends. I now know that’s what I want/might need to realise my potential but I would be interested to know your thoughts.

Pondering the Future of an IT Career: Part 1

For a while now I’ve been trying to think of how best to shape the future of my career. Over the past 7+ years I’ve worked in many environments with various roles and responsibilities. However, I now feel that it’s time for a change. After working in my current role for over 5 years, I’m at a stage where I feel that there is not much that is new for me. No new challenges …etc. This has led me to start thinking about what the next step should be, and I’m hoping that through this post I’ll be able to address this.

THE PAST

I’ve been interested in Information Technology and programming since my time in high school, and since then I’ve wanted to have a career in IT. This led me to complete a degree in Information Systems & Computer Science. Since then, I’ve worked in many different environments and this has given me a broad range of experience that I believe will serve me very well in the future. However, looking ahead, I think this same experience has created some hurdles which I need to overcome in order to take my career in the direction I would like for the future. The reason is simply that in many ways the future of one’s career depends on its past – that being the experience one has gained.

THE PRESENT

If today I look back at my career so far, I think the majority of my experience can be summed up in the following categories: maintenance, enhancements, admin and support, plus a little more here and there. A large portion of this was on legacy systems and slightly older technologies than what is mainstream today. Despite that, I’ve always had a keen interest in the opposite of that. You can see some of this from the many posts I’ve published on this blog.

The above is basically the key issue here: if the future builds on the past, then the options for the future are somewhat limited as a result.

THE FUTURE

The reason I mention this is that I’ve always been very passionate about new technologies and the possibilities they present – especially in the Microsoft domain. I have found that at work, the best times are those when I’m presented with a problem or a task whose resolution requires doing some R&D into new possibilities for improving or fulfilling the function. Whenever an opportunity presents itself to enhance things or improve processes …etc., those are the kinds of tasks I enjoy the most.

HOW TO GET THERE?

Given all the above, the remaining thing to tackle is how to change the current situation. Many might say that in the real world there might not be a perfect place where ‘the future’ I mentioned is always possible. However, there are likely many places out there that offer such opportunities. The issue is just how to close the gap – which is the result of the past – and move into a future where a different past is required.

I have some ideas myself, including perhaps revisiting doing some Microsoft Certifications or other areas of study. Despite that, I wonder if that is enough to bridge this gap in the competitive market that we have? Especially given that certifications face credibility issues. If not, then other than continuing to ‘live in the past’ – which is perfectly possible, yet not what I’d like to keep doing in the future – what other options are there? The only other option I can think of is finding a place where new technologies and best practices are used and being given an opportunity there to bridge the gap by building on a more generic and wide-ranging skill set. That’s a possibility, but how likely is this to happen?

I would be interested to know what you think.

Exposing your data using .NET WCF Data Services: Part 4

In the previous posts in the series Exposing your data using .NET WCF Data Services we’ve covered everything required, from building a data-driven application from scratch to making the application ready to run fully in the cloud. The final step in this process is to publish/host the application on Windows Azure. Below are the steps required to do so.

Creating the Windows Azure cloud service

The first thing that needs to be done is to create the service that will host our application. To do this I’ve chosen the Cloud Service option and specified the required parameters as shown below.

Once the service has been successfully created the Windows Azure Portal dashboard will show the service entry as can be seen here.

When we drill into the details of that service we will see two deployment options, one into a staging environment and one for production. We can deploy the application to either of these environments but for now we’ll use the staging option as shown below.

Once we drill into this option, the following dialog will be presented which allows us to specify the packages to upload. At this stage, we do not have any files that can be uploaded so we need to first create these before uploading.

Creating and uploading the Windows Azure packages

From within Visual Studio we can use the IDE to create the packages we need so that we can manually upload them using the portal screen shown earlier, or we can publish directly from the IDE. For the moment, we will use the portal to upload the files. To do this we need to package our solution into the required format.

1) Configure the role properties

The first thing to do before creating and uploading the packages is to configure the role properties for the instance we are uploading. From here you can configure things like the instance count and VM size. I’ve currently set these to a single instance and to use the smallest VM size as shown below.

The standard for Windows Azure is to use 2 instances – this will be apparent further in the post – but for now we’ll just use a single instance.

2) Build the packages

To build the packages, all that needs to be done is to select the package option by right-clicking the cloud service project and then choose the desired build configuration as shown below.

Once the above steps have been completed, the required Windows Azure files will be generated in the folder below – the folder will open once the build is successful.

3) Upload the packages

With our packages now ready, we can go back to the portal and select the generated files for upload as shown below. Note that the option Deploy even if one or more roles contain a single instance is selected here. If we do not select this option the Windows Azure portal will prevent the deployment from uploading. The reason for this is that in the role configuration that was shown earlier, we chose to only create a single instance of the role. We also selected the Start deployment option so that once the upload is complete the deployment to the staging environment is accessible straight away.

4) Deployment being prepared

After we complete the upload process, the instance we uploaded will start initialising and will go through several stages; below are screenshots of what you will see while the deployment is being made ready.

5) Deployment ready

Once the deployment is complete, the dashboard will show the status of the instance as Running.

Now, given that the service is running in the staging environment, we can drill down into the details of the instance and obtain the URL to access the service as well as other properties as shown below.

Accessing the service and fixing deployment issues

With all the upload steps now completed, we can use the URL of the staging instance to access the service as shown below. However, as you can see, our initial publish has failed due to the reason mentioned below. We are able to see the error here as I’ve already set the customErrors mode to Off. This issue is due to the explicit version number reference for the Microsoft.Data.Services dll which can be seen below in the *QuotesDataService.svc* file.

1) Microsoft.Data.Services dll mismatch error

2) Fixing the Microsoft.Data.Services mismatch error

To fix the dll issue observed earlier, all that needed to be done was to remove the explicit version reference. After this was done, we were able to successfully view the service data after fully deploying to Windows Azure, as shown below.
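The screenshots aren’t reproduced here, but for illustration, the ServiceHost directive in the .svc file typically looks something like the ‘before’ line below, and removing the explicit version details gives the ‘after’. The version number and public key token shown are placeholders, not the exact values from my project.

```
<%-- Before (fails if the assembly deployed to Azure has a different version): --%>
<%@ ServiceHost Language="C#" Factory="System.Data.Services.DataServiceHostFactory,
    Microsoft.Data.Services, Version=5.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
    Service="QuotesDataService" %>

<%-- After – drop the explicit version so the runtime binds to whatever is deployed: --%>
<%@ ServiceHost Language="C#" Factory="System.Data.Services.DataServiceHostFactory, Microsoft.Data.Services"
    Service="QuotesDataService" %>
```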

Exposing your data using .NET WCF Data Services: Part 3

The first step in migrating our code is to grab and install the latest Windows Azure SDK as shown below. At the time of writing this post the latest version is version 2.0.

Creating the Windows Azure service and role

Once the SDK has been installed, we can start using Visual Studio to work with Azure. To get started, we need to first load the solution created in Part 1 of this post series. After the solution has been loaded, we can begin creating our Windows Azure components which will run the WCF Data Service that was created in Part 1. The first step is to create a Windows Azure Cloud Service as shown below.

After the step above is completed we will need to create a role that will contain our application. As can be seen below, there are multiple options for doing this. For the purpose of this post series, I’ve gone with the WCF Service role as this is populated with the fewest files by default. Another option was to host the service in a web role. You will see below that Windows Azure Tools version 2.0 is selected for creating this role. I also have an older version installed on my PC, but 2.0 is chosen by default.

Completing the above steps results in two new projects being added to our solution as shown below.

Adding code to the Cloud Service Role

To begin the migration to the cloud service, we need to add our WCF Data Services code to the role that was just created. Here’s a list of things we need to do:

1) Add reference to Entity Framework

In the first post in the series we used Entity Framework to access the DB data. At this stage, our cloud service role does not have any Entity Framework references, so we need to bring these in. The easiest way to do so is using NuGet from the package manager console.
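The console screenshot isn’t reproduced here; the command would presumably be the standard Entity Framework install:

```
Install-Package EntityFramework
```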

2) Copy the Entity Framework files and other relevant files created in the first project

Now that the EF references are included in the project, we can just copy the relevant project files for EF & also the WCF related files into the Cloud Service role project as shown below.

3) Add references to WCF Data Services DLLs

After including the EF & the WCF service project files required, if we try to build the application, the following error is presented. The reason for this issue is that the WCF cloud role is not initially created as a WCF Data service and therefore is missing some assembly references.

We can bring in the WCF Data Service references by using the following commands in the package manager console. Both commands can be executed in the same way as done for including Entity Framework as shown in step 1.
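The commands themselves were shown in screenshots that aren’t reproduced here; most likely they were the standard WCF Data Services package installs below. Note that the first package pulls in the client package as a dependency, which is why running it alone should be enough:

```
Install-Package Microsoft.Data.Services
Install-Package Microsoft.Data.Services.Client
```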

Running the first command should include both references, but if not, you can add the missing DLLs using the second command. However, when I built the Quotes Service Role project after adding the references, it still failed and I got the error below.

The reason for the error shown here is that this cloud role was created as a WCF role, so it already had a reference to System.Data.Services.Client.dll which we do not need to use. To resolve this issue, we need to remove the redundant references. I will also remove the default svc files which were added when the role was created, as shown below, as these are not needed either.

As you can see, with the redundant files and references removed the project now builds successfully.

4) Copy the Cloud DB Connection String

The last outstanding thing to do now is to copy across the Quotes DB connection string which was used in *Part 2* of this series in order to allow our cloud service to connect to the cloud DB instance. I’ve done this by simply copying and pasting the connection string from the QuotesDataService project into the service role project as shown below.

After completing the above step the QuotesDataService project is no longer required and can also be removed from the solution.

5) Test the changes

The previous steps complete all the changes we needed to make to the code to make it ready for the cloud. We can now do a test run to ensure it still works the same way as it did in the previous post when we migrated the database. Below is a test I did, and as you can see the service is successfully running in the Windows Azure Emulator. Note that in order to run the emulator, your Visual Studio solution must be opened with elevated privileges, and in order to connect to the cloud database successfully your IP needs to be allowed in the firewall (refer to the previous post on how to do this if you have not read it already).

Summary

Following on from the posts Exposing your data using .NET WCF Data Services: Part 1 and Exposing your data using .NET WCF Data Services: Part 2, this post explained how we can migrate our WCF Data Services code into code that’s cloud-ready, in order to enable us to publish it to Windows Azure so that it can be available and used externally. The post covered *downloading the latest Windows Azure SDK*, *creating a Windows Azure cloud service* and *migrating existing code to a cloud service*. In the next post we will cover the final step of publishing the cloud service created in this post to a publicly accessible Windows Azure service.

The History

It’s amazing how fast time flies, but as of today I have been blogging for 8 years! Before I started blogging back in 2005, while in my last year at university, I remember talking to Paul Andrew, who was working at Microsoft New Zealand at the time when I was a Microsoft Student Partner. Back then, my title was ‘Microsoft Student Representative’, which was then changed to *Microsoft Student Ambassador* and is currently called *Microsoft Student Partner*. It’s hard to believe that this was 10 years ago now for me!

Putting the student ambassador story aside, at that time, when I was involved in some stuff with Paul, I found out that he had a blog. I thought to myself, hey, it would be nice to have one of these. I asked Paul how he got his blog and he mentioned that these are allocated to Microsoft employees. I still wanted one, so I thought why not start my own. I started thinking of domain names to use and ended up going for https://www.dan.net.nz at the time. In 2005 I thought it would be nice to call myself DanDotNet, and that’s how I came up with that first domain name – dan.net was not available to use. I’ve since also acquired the .au equivalent and, a couple of years ago, the domain I currently use as my main one. A couple of months ago I decided to let that first domain name lapse after using it for so long. The newer domain has grown on me since I’ve been using it, so it felt like the right time to let https://www.dan.net.nz go. That’s basically how my blogging journey started, and it’s good to see that 8 years on I’m still doing so.

Thoughts on blogging and blogging tips

Looking back at my blog posts over the past eight years, what I notice most is that in the very early days the majority of my posts were very short in length. I suppose this is what you could refer to today as the equivalent of what we now know as tweets! However, as I started gaining experience, knowledge and an audience, my posts matured a lot in content, value and style. In addition, my blog now has much more exposure than it did in the old days and I hope it continues to grow.

When I started blogging, most of the people in this blog list had active blogs. However, a few have disappeared over the years. I think social media and sites like Twitter have taken content away from blogs, but there are also some who have decided to retire. This includes one of my favourite blogs from that time, that of Xero CEO Rod Drury, who now occasionally blogs on the Xero Blog instead.

Like Rod, who these days probably needs 72 hours in a day, we all have times when we are busy and that takes us away from blogging; other times there may not be much to say or blog about. A couple of years ago, I started regularly reading Scott Hanselman’s blog – my current favourite blog to read – and he has some great tips for bloggers in his posts Your Blog is The Engine of Community and Your words are wasted. The key message for me from both of these posts is that when you blog, you own your words, you are in control, and your words contribute content to the community, which can be useful in many ways. *That’s why if you have a blog, you should blog more*. Reading Scott’s posts has definitely encouraged me to keep blogging and even to make more effort to blog well and not just rant!

A trip down memory lane

Now given my blog’s anniversary, I thought I would share 8 of the most popular – viewed/clicked – posts I’ve had on this blog. Here’s the list in alphabetical order, which I hope you enjoy.

Exposing your data using .NET WCF Data Services: Part 2

In the previous post Exposing your data using .NET WCF Data Services: Part 1, I wrote about creating a WCF Data Services application to expose data from a SQL database. In this post we’ll go through publishing the database created in Part 1 to the Windows Azure cloud.

Creating the Windows Azure database

The first step in migrating the Quotes data services application that we created in Part 1 is to move our database into the cloud. To get started, the initial step is to create a new database instance in Azure, which we can do in the management portal via the following steps.

Navigate to the SQL Databases node

Once we are logged into the Azure portal, we can specify the details for the database we wish to create via the Databases node as shown below. From there we can choose options like the database type, size and collation. The Business edition of the database offers larger storage capacity.

Manage database details from the SQL Databases dashboard

Once our database instance is created, we will be able to see the database details in the dashboard of the portal and are able to start managing/configuring the database details as shown below. The dashboard will allow you to see server events, errors, amount of storage used and so on.

Attempt to connect to the database

With the database instance now created we are able to view the server details on the dashboard. From there we will be able to view the URL which we can use to connect to the server using the credentials created in the previous step.

However, when I attempt to establish a connection using SQL Server Management Studio, I’m presented with the error below. The reason for this error is that to be able to connect to a SQL Azure database, the IP address of the client computer – or an IP address range – must be allowed in the database firewall rules.

To correct this issue we need to allow the IP address reported in the error above – masked – into the SQL Azure firewall. This can be done by accessing the Manage allowed IP addresses setting and then adding the desired IP address or ranges as shown below.

Establish a connection to the Azure database from SSMS

Once the IP addresses have been allowed in the SQL Azure firewall we are now able to connect to the database from management studio as can be seen here.

Migrate our database to SQL Azure

With our connection now established, the main remaining task is to migrate our data out to the Azure database. One way to do this in the past was by using the SQL Azure Migration Wizard. However, given we are using SQL 2012 in this instance, there is a way which I consider simpler. You may be aware of the SQL Server Management Studio Generate Scripts utility. This same utility can be used to export scripts in a format that is suitable to run in Azure. The article How to: Migrate a Database by Using the Generate Scripts Wizard (Windows Azure SQL Database) explains the options that need to be chosen in SSMS to do this. Below is a quick summary:

– Set Output Type to Save script to a specific location. Select Save to file, click Single file, type the file name and location in File name, and click Advanced.

– In Advanced Scripting Options, set the “Script for the database engine type” option to “SQL Database”, set the “Convert UDDTs to Base Types” option to “True”, and set the “Types of data to script” option to “Schema and data”. Click OK.

Once these options are set, the script is generated as can be seen below.

Execute the generated script on the SQL Azure database

Now that the script is ready we can execute it on our target database and once this is successful, our tables will be created in the target instance as shown below.
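If you prefer the command line over SSMS for this step, the generated script can also be run with sqlcmd – a hedged example, with the server name, credentials and file name below all being placeholders:

```
sqlcmd -S myserver.database.windows.net -d QuotesDB -U quotesadmin@myserver -P <password> -i QuotesDB-azure.sql
```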

Connect our local application instance to the Azure database

With our data now in the cloud, the final step is to connect our local application instance to the cloud database. This can be achieved by simply updating the connection string in our project to point to the SQL Azure instance as shown below.
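The actual connection string lives in the project’s Web.config (wrapped inside the Entity Framework metadata connection string) and isn’t reproduced here. As a rough sanity check that the SQL Azure instance is reachable – with the server, database and credentials below being placeholders – you can test the underlying provider connection from a scratch console app:

```csharp
using System;
using System.Data.SqlClient;

class ConnectionSmokeTest
{
    static void Main()
    {
        // Placeholder values – use the server name and credentials from the Azure portal.
        var cs = "Server=tcp:myserver.database.windows.net,1433;" +
                 "Database=QuotesDB;" +
                 "User ID=quotesadmin@myserver;" +
                 "Password=<password>;" +
                 "Encrypt=True;";

        using (var conn = new SqlConnection(cs))
        {
            conn.Open(); // throws unless your IP is allowed through the SQL Azure firewall
            Console.WriteLine("Connected to: " + conn.DataSource);
        }
    }
}
```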

After the above step is completed we should be able to view our data in the same way as we did in Part 1. By running the application locally we can see that our data displays as expected as shown here.

Summary

The above steps in this post show how we can easily create a SQL Azure database instance using the management portal and migrate our local database and application to use that instance. In the next post we will go through the final step of moving the application’s code to Windows Azure.

Exposing your data using .NET WCF Data Services: Part 1

Last year I wrote the post Data as a Service: The next big thing? where I mentioned that in a *Devices + Services* era, one issue we face is that in many cases it is not possible to access useful data to build consumer applications over. I summed up that post by saying the following:

As a result I really think that for Devices + Services to succeed we need more data to consume… if you have data that can be shared to make other services possible, simpler or better, then share it! It's time to make DaaS the next big thing alongside Devices + Services.

With that in mind, the question is: is there a feasible way for companies and organisations that have data to share to expose it? I've been wondering about that myself recently. I had heard of OData but never really looked into it, and although I'm still a newbie to OData, it turns out that it provides a simple, uniform way of publishing data. In .NET this can be done using WCF Data Services, and below I provide a walkthrough of publishing a small database using WCF Data Services and consuming it in an application.

Creating a Simple Database

For this post I decided to build a simple quotes database for the walkthrough, putting together quotes I found on the internet from people like Bill Gates, Steve Jobs and others.

As can be seen above, the table contains three basic columns to hold the quote ID, the Author and the Quote text. The next step is to create a way to consume that data, which we can do using WCF Data Services.
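For readers who can't see the screenshot, the shape of the table maps to a class along these lines. The property names are my assumption based on the columns described; the real entity class is generated by the Entity Data Model in the next section.

```csharp
// Rough sketch of the Quotes table as a C# entity; names are assumptions.
public class Quote
{
    public int ID { get; set; }         // primary key
    public string Author { get; set; }  // e.g. "Bill Gates"
    public string QuoteText { get; set; }
}
```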

Creating a WCF Data Service

You can host a WCF Data Service in a web application. Given that all we need for this post is the service itself, I have added it to a blank ASP.NET web app using the following steps.

1) Create an Empty ASP.NET Web Application in Visual Studio

Launch Visual Studio and add a new ASP.NET Empty Web application.

2) Add your Data Model to the project created in the previous step

The next thing that needs to be done is adding our Data Model so that we can use that in our service. One way to do this is by adding an ADO.NET Entity Data Model as shown in the screenshots below.

For this post we'll use the Quotes database that was created earlier by choosing the Generate from database option.

Once we have established a connection to our database we can then choose the desired tables and when completed our model will be added to the solution as seen below.

3) Assigning our Data Model to a Service

With our Data Model ready we can now create the service that will consume this model. This can be done by adding a WCF Data Service file to the project. In the screenshots below I create a QuotesDataService.svc file.

Once this file is created, Visual Studio presents the code below.

Now what we need to do is assign our Data Model entities to the DataService and also set up the access rules to the service. Below is what I’ve done for the Quotes service.

As you can see above, I’ve passed the QuotesDBEntities model – the data source class name – to the DataService and set read access to the operations and data.
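For reference, the resulting service class follows the standard WCF Data Services template. A sketch of what mine looks like, with the read-only access rules described above, is below (QuotesDBEntities is the generated model context):

```csharp
using System.Data.Services;
using System.Data.Services.Common;

public class QuotesDataService : DataService<QuotesDBEntities>
{
    // Called once to initialize service-wide policies.
    public static void InitializeService(DataServiceConfiguration config)
    {
        // Allow read access to every entity set exposed by the model.
        config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
        // Allow read access to all service operations.
        config.SetServiceOperationAccessRule("*", ServiceOperationRights.AllRead);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V3;
    }
}
```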

Accessing and Querying the Data Service

The previous steps complete everything required for us to be able to access the data, and we are now ready to start querying the service.

1) Browse to the service URL

Given we are still running locally, all we need to do is run the Visual Studio project with the QuotesDataService.svc page set as the start page. When this is done we are presented with the following page, showing the entities available to query in this service.

2) Perform some queries on the Data

With our service published, we can now start querying. Below are some example queries I have done; the URL form of each is sketched after the list.

Show all available quotes

Show a quote with a specific ID

Show quotes by a specific Author
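For reference, and assuming the entity set is named Quotes, the three queries above take roughly the following URL forms:

```
/QuotesDataService.svc/Quotes                                   -- all quotes
/QuotesDataService.svc/Quotes(1)                                -- the quote with ID 1
/QuotesDataService.svc/Quotes?$filter=Author eq 'Bill Gates'    -- quotes by a specific author
```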

The above queries demonstrate a few ways OData can be used to query the data published in our service. You may have observed that in this instance the data is displayed in the ATOM feed format. WCF Data Services also enables you to publish that data using JSON. One way to do this is by changing the request headers to accept the JSON format, which you can do through your code or Fiddler. Another way is to use the $format query option. Unfortunately this is not supported in WCF Data Services out of the box. The good news is, there's a way to add that support!

Once you have downloaded and extracted the JSONPSupportBehavior extension, the next action required is to add the class file to your project so that the extension can be used. I've done this by copying and including the file in the solution as shown below.

3) Assign the [JSONPSupportBehavior] attribute

By adding the class in the previous step we are now able to annotate our service class with this attribute so that we can publish our data in JSON format. To do so, all that needs to be done is to bring in the required namespace and assign the attribute to the class as shown below.
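As a sketch, the service class ends up annotated like this; the namespace below is hypothetical, so use whatever namespace the downloaded file actually declares.

```csharp
using System.Data.Services;
using JSONPSupport; // hypothetical namespace; match the downloaded file

[JSONPSupportBehavior]
public class QuotesDataService : DataService<QuotesDBEntities>
{
    // InitializeService unchanged from earlier.
}
```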

This completes the required steps and we can now access our service data using JSON format.

4) Viewing the service data using JSON

With the code from the previous step added, we can now use the $format option to view the data as JSON, as shown below. Almost!

As you can see, when I try this and pass $format=json in my URL, I get the response above instead of the actual JSON. The reason for this is that I'm using the latest version of WCF Data Services: the changes we made earlier to support the format query will only work directly if you're using a version of WCF Data Services older than version 5. However, we can work around this issue by either adding the odata=verbose option or setting the MaxDataServiceVersion header to 2. To fix this issue in my case, I went with the former option by updating the JSONPSupportBehavior extension code to add the verbose option in the AfterReceiveRequest method as shown below.
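Here is a sketch of that change inside the sample's message inspector; the surrounding class and exact member names come from the downloaded sample, so treat this as an approximation. When $format=json is requested, the option is removed and the Accept header is rewritten to ask for verbose JSON.

```csharp
// Requires System.ServiceModel, System.ServiceModel.Channels and
// System.ServiceModel.Dispatcher. This method lives on the sample's
// IDispatchMessageInspector implementation.
public object AfterReceiveRequest(ref Message request, IClientChannel channel,
    InstanceContext instanceContext)
{
    if (request.Properties.ContainsKey("UriTemplateMatchResults"))
    {
        var http = (HttpRequestMessageProperty)
            request.Properties[HttpRequestMessageProperty.Name];
        var match = (UriTemplateMatch)
            request.Properties["UriTemplateMatchResults"];

        if ("json".Equals(match.QueryParameters["$format"],
                StringComparison.OrdinalIgnoreCase))
        {
            // WCF Data Services rejects unknown $format values, so drop it and
            // request the older "verbose" JSON format via the Accept header.
            match.QueryParameters.Remove("$format");
            http.Headers["Accept"] = "application/json;odata=verbose";
        }
    }
    return null;
}
```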

With the above steps completed we can finally request our service data in JSON format as shown below.

Summary

The steps in this post demonstrated how we can use WCF Data Services to easily publish a data model so that its data can be consumed by other applications through OData. This enabled us to expose the data using both the ATOM and JSON formats. In the next post I will continue on from here by publishing this data into the cloud using Windows Azure, so that it can then be consumed by client applications such as a Windows 8 or Windows Phone 8 app.

https://www.wissa.net/why-bing-when-you-can-google/ Wed, 03 Apr 2013

Over the past few years, I've blogged a number of times on Microsoft's search offering, from the early days of Windows Live Search through to its current Bing services. Despite my interest in Windows Live Search and Bing over that period, I still find myself using Google a lot of the time, and I decided to try and find out why. One reason for me, which I mentioned in my post From Windows Live Search to Bing in 6 years, is the home page. Every time I visit the Bing homepage I find it distracting, because when I go there I go there to search. I know that the search box is there and I can easily move away from the home page by starting to type straight away. However, why is this page so busy with information, pictures and colours that I don't care about? It would be nice if one could just customise that page to one's own liking.

With that aside, I believe that the main reason that I use Google over Bing is that Bing is a latecomer and I’m already used to using Google. So, unless there’s an advantage that Bing offers over Google then put simply, what’s the point?

Lack of Innovation

In recent years Bing has introduced many new features that help in producing good search results. The posts I linked to earlier cover some of these. However, both search engines still lack significant innovation and have not improved the way we search in many years. It's still just a search box. Here's an example of something both engines do that is simply pointless!

Google

Bing

Both sets of pictures above, from Google and Bing, show features that most of us would not care about. Why would one care how long a search took or how many results came back? When we're searching we're looking for information, so what we care about is whether or not we found it. If a search is too slow, we will know, such as when a page takes too long to respond or doesn't load results. The fact that we got five billion plus results back in 0.17 seconds doesn't provide us with any value whatsoever.

The next thing is the results pages. Rather than search engines returning tens, hundreds or even thousands of pages of results, who actually views results on page 5 or beyond? Why don't search engines try to help us refine our queries, rather than letting us scroll through pages and pages of results until we find what we're looking for? So to both Google and Bing I say: please remove all the result pages and try to understand me and my query better, so that you give me what I am looking for. Accurately.

User Understanding

The last paragraph brings me to an interesting point. One reason I prefer Google over Bing in my day-to-day usage (and as a result use Google more) is that for a lot of the queries I perform, Google seems to know me better and has a better understanding of what I am looking for. I think this is an area where Bing, the so-called decision engine, needs to improve a lot. Below are some examples of where Google does better in that department, in my view.

Finding out the current time

One query I perform often is searching for the current time in a particular location. Google has the smarts to display the answer as the top returned item, instead of requiring me to click on a link to find it. Bing, on the other hand, doesn't. Below are the results in Bing and Google, and you can see that Google does better there.

Bing

Google

As you can see above, doing the time search in Google has saved me time by displaying the actual time within the results page. In Bing, I would have to navigate to the www.timeanddate.com website to find out the current time, which is a much slower process.

Recent sports results

There are many other areas where Google does much better than Bing in returning relevant results. One of them is recent sports results. I'm a tennis fan and regularly check for tournament results, and once again Google makes that very simple by knowing which tournaments are current or recent and displaying the results within the results page. Here's a comparison between the two engines.

Bing

Google

Once again, as can be seen above, Google had a better understanding of my query and displayed all the recent tournament results for Andy Murray, which is exactly what I was looking for. Bing unfortunately did not do the same. These are just two examples of things Google does better than Bing, and they are among the reasons I use Google more often.

With these differences it might seem that there's no reason to use Bing, and for me, that's mostly the case. On the other hand, there are still some areas where Bing does really well.

Going back to the original question of this post: unless you are searching for the very specific things that Bing is known to do better, there really is no reason at the present time to Bing given you can Google. Google is still better at understanding users, at least in my experience, and as a result I don't think there's enough reason for me to make the switch, despite all my interest in Bing over the past few years.

Last words, Bing: you need to show us something new. You need to understand us better. Until then, it’s back to googling.