Recent Comments

@Frank - yup totally agree. I don't believe it really makes sense to store binary resources, but in this case I have to support it since that's what Resx supports. You can also use this stuff outside of ASP.NET where you might need to have some resource access.

We recently migrated a medium-ish sized system from two dedicated hosted servers at RackSpace over to Azure. We run a data api that serves 3k/6k (off-peak/peak) requests per minute - about 7M requests a day total to about 70,000 unique clients, some international.

We're having a lot of sporadic issues. In fact, as I write this one of the websites in our group has been returning 503's for 25 minutes for no discernible reason.

We're using about a dozen D-series cores of cloud services for our main API app, which does a fair bit of image generation using System.Drawing, GDI+, third-party native libraries, etc. An A2 cloud service runs recurring jobs, with the Scheduler feeding a storage queue to initiate them. Another A2 runs a few data ingestion apps and FTP, and mounts a storage file share that's shared with the recurring job processor and the API. Another A2 cloud service handles periodic video encoding and image generation. A handful of websites run in one pool - our public storefront, a site just running ImageResizer off our blob storage, and some internal tool sites. Plus a 13GB Redis cache.

Our main DB is currently a P3 because Azure has some bug where our database was failing over 10+ times a day and our apps would be unable to connect to our DB for 1-2 full minutes at a time, several times a day. We also use a P1 master and P2 active readable secondary DB.

I can't even begin to enumerate all the many little weird issues we experience, but the end result is that we've barely had a single day go by without all our klaxons blaring at least once. Service downtime is not an exceptional event; it's a matter of course.

There's also no effective way to get questions about these events answered, short of perhaps paying for the $1000-a-month support plan. Currently we submit tickets and get non-answers 5-7 days later.

@Tibi - this was asked before and the behavior is by design. When a property changes you get passed the list of properties with their state, so you can decide what you need to address. There seems to be no need to raise a separate callback for each change, because you'd get exactly the same data with each of them.

If you want to handle multiple callbacks, go through the list of props and determine what needs to be done based on the values.
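To sketch the idea (with hypothetical names - this is not the library's actual API, just an illustration of the pattern): a single batched callback can be fanned out by iterating the property list and reacting per value:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical handler: the callback hands you ALL properties with their
// current state in one call; walk the list and decide what to address.
void HandlePropertyChanges(IReadOnlyDictionary<string, object> props)
{
    foreach (var prop in props)
    {
        // React per property based on its value.
        Console.WriteLine($"{prop.Key} is now {prop.Value}");
    }
}

HandlePropertyChanges(new Dictionary<string, object>
{
    ["Width"] = 100,
    ["Height"] = 50,
});
```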

@KA - The core library can be used completely outside of the ASP.NET context, so if you have a Windows (non Web) app you can use database resources in that as well using the ResourceManager or DbRes.

When running under ASP.NET with the Web package you get two different project options to run under: WebForms or Project. WebForms uses Local/Global Resource folders and naming conventions, while Project uses simple files and can be used with any .NET project or application.

All the front-end stuff in the localization UI depends on the Web package and assumes that ASP.NET is available, so if you run the Web Admin form, HttpContext is always there. All the tooling it uses, however, is available in classes that you can call directly from your own applications/code.

For a WCF project you would just use the core library, which has no dependencies on HttpContext - there are only two support classes in the core library that rely on System.Web: DbRes (which has helpers that return HtmlString) and the various exporters that default paths to the web root path *if* an HttpContext is available.

If you find other places in the core library, please file an issue on GitHub and I'll take a look at it.

Really cool stuff, I am impressed.
I have one question: I saw that for retrieving resources you use HttpContext's global or local resources. Is there any other way to retrieve the resources? For example, if you use a WCF service over TCP, HttpContext is null.
I mean, some way to be protocol independent.

For those not familiar, this uses the DEBUG flag set in the project configuration on the Build tab ('Define DEBUG constant'). You can configure whether the flag is set for each of the project's build profiles.
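A minimal illustration of what that checkbox controls - the DEBUG conditional compilation constant:

```csharp
using System;

// The DEBUG constant is toggled per build profile by the
// 'Define DEBUG constant' checkbox on the project's Build tab.
bool isDebugBuild =
#if DEBUG
    true;
#else
    false;
#endif

Console.WriteLine(isDebugBuild
    ? "Compiled with the DEBUG constant defined"
    : "Compiled without DEBUG (e.g. a typical Release profile)");
```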

I am using the Expando object to facilitate ad hoc data queries in my application, where the returned data is the result of a SQL query built at runtime. Not being tied to strong typing is so very VFP-like.
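For anyone curious, a minimal sketch of the pattern (the column names here are made up - in practice they come from the runtime-built SQL result):

```csharp
using System;
using System.Collections.Generic;
using System.Dynamic;

// Simulate a result row from a SQL statement built at runtime:
// column names aren't known until the query executes.
dynamic row = new ExpandoObject();
var columns = (IDictionary<string, object>)row;
columns["CustomerName"] = "Acme";
columns["Total"] = 42.5m;

// Members can then be accessed dynamically, much like a VFP cursor field.
Console.WriteLine($"{row.CustomerName}: {row.Total}");
```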

Excellent and a must-read article. I share the philosophy that understanding the inner workings of things contributes greatly to further creations - besides the fact that seeing the big picture gives lots of satisfaction.

It seems that extra settings in the publishing profile are not taken into account on individual publishing.

In my publishing profile I have added some custom settings to minify CSS and JS files. When I publish the entire project, everything works as expected, but if I want to publish just the Scripts folder, the JS files don't get minified anymore.

Also, I have set up the profile to precompile source files, so almost everything ends up in the bin folder. However, there's no way to publish only this folder.

Great job! Thank you!
But I have one problem.
On my local machine Fiddler creates the certificate and the FiddlerCore API works perfectly.

But I also need to build my project on our TeamCity CI server, and there I have a problem:
FiddlerCore cannot create the certificate.
My code fails on this line:
"CertMaker.createRootCert()"
with the following error message:
"System.IO.FileNotFoundException : Cannot locate: MakeCert.exe. Please move makecert.exe to the Fiddler installation directory.
at Fiddler.DefaultCertificateProvider.CreateCert(String sHostname, Boolean isRoot)
at Fiddler.DefaultCertificateProvider.CreateRootCertificate()".

In my project I reference two DLLs: BCMakeCert.dll and CertMaker.dll.

And here is my method:

void InstallCertificate()
{
    if (!CertMaker.rootCertExists())
    {
        if (!CertMaker.createRootCert())
        {
            throw new Exception("Unable to create certificate for FiddlerCore.");
        }
        if (!CertMaker.trustRootCert())
        {
            throw new Exception("Unable to trust certificate for FiddlerCore.");
        }
    }
}

Just encode with JavaScript?! That amounts to writing a massive encoder/decoder that will break just about every time the W3C changes things. Just try this... examples:
xmlhttp.open("POST", "page.aspx?val1=abc&val2=<d", false); // or true
or anything with "<" followed by a letter...
or anything that has '&' in it...
The above is with RequestValidation in force.
Examples: if you encode '<d'... no go!
If you swap '<' or '&' with certain three-character sequences - and, of course, pile up decoding code on the server side - things can be made to work, but the code is horrendous. I don't know; show me what I'm missing.
So, hypothetically, <div would become xxx, <p... zzz, etc... and xxx may fail in a porno context! lol

Hi guys,
I'm running a Windows service that renders HTML content in a WebBrowser control, takes a screenshot of the output, and saves it to a specified local directory. I'm able to achieve this while I'm logged into the server; once I log out of the server I get a script error. I think it still tries to find the registry entries under HKEY_CURRENT_USER.

Thanks Dave. If you've already been doing Cordova development there's probably not much new stuff here, except maybe the focus on Visual Studio, which I was rather impressed with in how easy it makes the development process even with iOS.

Nice. I just pulled my first print copy of CODE magazine out of the mailbox this afternoon and thought that would be an interesting article to read since I've been doing a lot of Cordova work lately myself. Didn't realize you wrote it. I will definitely find some time to read it now.

Currently the answer is no. vNext is very slow compared to, say, Web API or current MVC, but it's also not optimized yet. In tests I ran with beta 2, perf was somewhere around 75% of what MVC/Web API produced.

I haven't added vNext to the tests because of the unstable, changing environment and the really crappy perf at the moment. This will get better once they get closer to a fixed feature set and release candidates, I think.

To be honest I don't expect raw performance to be greatly improved over current tech, but scalability - higher overall request load - might be, if the code is properly async optimized.

I agree with Andy - would vNext be faster? (Because they have merged Web API and MVC, and it's lighter since it doesn't reference System.Web and the heavy 200MB framework? Lighter meaning a smaller memory footprint to handle each request...)

As much as I like doing C# and .NET tech, .NET 4.5 is causing major headaches in our web application projects, mainly ASP.NET MVC.

Some of our backend libs still are 4.0 since they're mainly used in WinForms applications where we don't want to force the clients to do a new download.

But as good as it is, Visual Studio 2013 is a pain when you're using ASP.NET MVC and do NOT want to use 4.5, especially if you're using NuGet for all those nifty packages like Bootstrap, jQuery, log4net, etc.

I sometimes don't understand why the VS and .NET teams don't have in mind the developers who are actually using their stuff. I would like to keep costs low, but I'm wasting my company's money on such issues.

I've always referred to this article over the years in my various discussions of performance :) Any chance you might revisit it for the new 2015 technologies? (MS has put a lot of effort into performance since 2012.)

Not sure why you can't click the checkboxes. Does it work if you take the with-font class off? Does the sample form work for you? I can't see a reason it would work with the keyboard but not the mouse. The mouse target should work for both the label and the actual checkbox. I use the exact code you have in an actual application, without the Angular bindings, without problems. You might want to double-check the actual HTML that is rendered with the dev tools' Inspect Element to ensure there isn't something getting injected into the middle.

@Pawel - it works down to IE 9 which is the first IE version that partially supports CSS3 which is what makes this work.

You really don't want to use Azure's SQL databases... They've put a lot of effort into these over the past year to make them perform dreadfully slowly. A lot of this I wouldn't have a problem with, as much of it is there to throttle bad database design and bad queries - except they don't have good tooling to help you identify your problems. You can't run the SQL performance analyzer. And worse, you can't easily get a copy of your database down to your local machine to do this kind of analysis (even if it were comparable).

Hosting your own database on a VM is the preferable option. Your crappy A2 VM experience is easily equivalent to the $900/month P2 level of Azure SQL.

I went to settings and then workspace. I had some old workspaces referenced and once I cleared those out and restarted the browser the issues cleared up for me. Seems like it might be related to the source mapping features, as @Pilotbob pointed out.

@Chris - Not sure. I think that should work, but you'll have to make sure the handlers are registered in the <system.web> section instead of <system.webServer>. I think you may have the configuration backwards? The <httpHandlers> section is for classic mode; <handlers> is for integrated mode.
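For reference, the two registration styles look like this (handler name, path, and type are placeholders here, not the actual values from the post):

```xml
<!-- Classic pipeline mode: handlers go in <system.web>/<httpHandlers> -->
<system.web>
  <httpHandlers>
    <add verb="*" path="MyHandler.axd" type="MyApp.MyHandler, MyApp" />
  </httpHandlers>
</system.web>

<!-- Integrated pipeline mode: handlers go in <system.webServer>/<handlers> -->
<system.webServer>
  <handlers>
    <add name="MyHandler" verb="*" path="MyHandler.axd" type="MyApp.MyHandler, MyApp" />
  </handlers>
</system.webServer>
```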

I'm with you on the DateTimeOffset/DateTime issue. On the server I only want to deal with UTC dates.
But I wonder why you don't use the JavaScript getTimezoneOffset() method of a Date on the client to get the user's timezone?

Hey Rick, I really like your work. I'm trying to implement the JavaScriptResourceHandler using classic mode, but it doesn't work. The requests are answered with a PlatformNotSupportedException telling me that integrated pipeline mode has to be used. I followed the implementation guide in your post, but I was only able to make it work using integrated mode and adding the handler to the <handlers> section (not the <httpHandlers> section). Is classic mode not supported anymore?

We would love to use SQL Database (aka SQL Azure), but the feature set isn't there and running a VM with SQL on it is overly expensive for our shop. Having all of our data and infrastructure in Azure would be way easier to manage and work with, but we ended up keeping an on-premise environment. We tested the performance of our VM versus Azure's VM for a similar setup and saw a pattern similar to what you saw with your older physical machine.

Bottom line is that if you can't move to Azure Websites and SQL Database in Azure, then it's not worth moving. The cost/performance of VMs in Azure for smaller companies and individuals is holding back quite a bit of migration to the cloud - at least in the Microsoft world. BizSpark is decent, but when the cost is so high it's hard to justify that kind of money.

In this case, Output is not being evaluated in the debugger window when I hover over "this.Output" in the code. When I hover over "this", I get the window for that, and drilling down to Output I get the "Could not evaluate expression" error with a refresh icon. When I refresh, it tells me that it cannot convert LoginTest to TestBase<LoginOutput>.

If I add in a plain-and-simple backing field, the backing field gets evaluated fine.

@Andreas - Resource managers and providers are case sensitive, so you have to match key names exactly, since they are based on dictionary lookups. If you turn the provider off, I'm guessing you're getting the default values you have in the controls, but it's not actually using any resources at all. Even Resx resource names are case sensitive.

In the database, key case sensitivity depends on the database's collation settings. If you're using SQL Server you can use a case-insensitive collation, which is the default.
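The difference is easy to demonstrate with plain dictionaries, which is effectively what the in-memory resource lookups are:

```csharp
using System;
using System.Collections.Generic;

// Resource lookups behave like an ordinal (case-sensitive) dictionary:
var resources = new Dictionary<string, string>(StringComparer.Ordinal)
{
    ["HelloWorld"] = "Hello World"
};

Console.WriteLine(resources.ContainsKey("HelloWorld")); // True
Console.WriteLine(resources.ContainsKey("helloworld")); // False

// A case-insensitive comparer matches what a case-insensitive
// SQL Server collation effectively gives you for key lookups:
var ciResources = new Dictionary<string, string>(
    resources, StringComparer.OrdinalIgnoreCase);
Console.WriteLine(ciResources.ContainsKey("helloworld")); // True
```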

The key names seem to be case sensitive. Some strings in our application only show the key names, and when we turn off the provider it all works. If we change the resource key name to match what is on the web page, it works.

@James - I think I understand what DateTimeOffset does - it stores the date-time *and* the timezone offset in effect when the data was captured. Makes sense. But for Web apps you NEVER capture time that way. You want to capture the date in UTC, and the offset is irrelevant because it represents the server's time, not the client's time.

Even if you DID store the offset from the original user's timezone (which means you'd have to do the conversion up front, because the server's not running in that user's timezone), you still get *only that timezone* - not the timezone that a user of the date might want to see at a later time.

I guess I don't see how DTO helps if the time value's offset is fixed to a specific timezone, when the application always adjusts to the user's timezone preference, which mostly will not be the original DTO offset. You STILL have to do these conversions for EVERY user, and if I do that, what does DTO buy me? Nothing except more storage required.

I also don't agree with this:

> The *only* time you need to do any conversion is strictly when displaying

because if you do certain datetime operations - like date queries that group by day, month, or a less granular time increment - you have to adjust those queries for that timezone. Otherwise you're going to include the wrong time range. So there are a number of places where this matters - almost every date query with user input in particular, since those are typically done on day ranges.
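As a concrete sketch of the kind of adjustment meant here (generic .NET code, not from the article): to query a user's local "day" you compute the day boundaries at the user's offset and query on the equivalent UTC range:

```csharp
using System;

// A user's "June 1st" is not the server's (or UTC's) June 1st.
// To group or filter by the user's day, compute the day boundaries
// at the user's offset, then query on the equivalent UTC range.
var userOffset = TimeSpan.FromHours(-10); // e.g. Hawaii

var localDayStart = new DateTimeOffset(2015, 6, 1, 0, 0, 0, userOffset);
var utcStart = localDayStart.UtcDateTime;            // 2015-06-01 10:00 UTC
var utcEnd   = localDayStart.AddDays(1).UtcDateTime; // 2015-06-02 10:00 UTC

// e.g. WHERE Entered >= @utcStart AND Entered < @utcEnd
Console.WriteLine($"{utcStart:u} .. {utcEnd:u}");
```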

@Rick - WRT "Since DTO only supports a single timezone - I first have to convert to that timezone to save" and "have to get the date into the right TimeZone for saving first" - those are not correct (and it's very unfortunate that those were the reasons you've avoided using it).

The whole point is that since the data type stores the offset with the datetime, then the same column can store any timezone. You don't have to do *any* timezone conversions at any point if you don't want. You get a DateTimeOffset from someone in Hawaii that's got a UTC offset of -10 and you can store it as-is and still happily allow Oregon people to store in -8/-7, EST people to store -5/-4, etc. If SQL Server forced developers/users to convert to a particular timezone for saving the datatype would have no benefit over just saving as datetime2 and telling people to store as UTC (certainly the best practice if you're stuck using datetime/datetime2).

The *only* time you need to do any conversion is strictly when displaying, and only if you want to display it in a different timezone than what it already is (for instance, displaying it in the user's timezone regardless of what the originating timezone was). You can do comparisons, queries, etc all without having to do any conversions.

In your enter-in-Hawaii/display-in-Oregon scenario, with datetime or datetime2, you would typically convert twice: once on the write path to convert to UTC for storage (since your storage has no support for encoding the UTC offset, you would either encode it in a separate column or, more likely, store the version with no offset), then again on the read path to convert the UTC to Oregon time.

If you use datetimeoffset instead, you don't need to convert on your write path at all. It comes in as offset -10 from your Hawaii person, -8 half the time from your Oregon people, -7 the rest of the time, etc., and you just store it like that. On your read path, you can display it with the original offset if you want (something that's not really an option if you forced conversion to UTC on the write path), or convert it to local time (just like you would in the datetime/datetime2 scenario).
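To illustrate the point with a small generic .NET sketch (not code from the comment): two DateTimeOffset values with different offsets compare equal when they represent the same instant, so mixed-offset values can be stored and queried side by side:

```csharp
using System;

// The same instant, captured in two different timezones.
var hawaii = new DateTimeOffset(2015, 6, 1, 2, 0, 0, TimeSpan.FromHours(-10));
var oregon = new DateTimeOffset(2015, 6, 1, 5, 0, 0, TimeSpan.FromHours(-7));

// Equality, comparisons, and sorting all work on the UTC instant.
Console.WriteLine(hawaii == oregon);   // True
Console.WriteLine(hawaii.UtcDateTime); // 2015-06-01 12:00 UTC

// Display-time conversion is the only conversion you need:
Console.WriteLine(hawaii.ToOffset(TimeSpan.FromHours(-7)));
```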

***
When should you use datetimeoffset instead of datetime? The answer is: you should almost always use datetimeoffset. I’ll make the claim that there is only a single case where datetime is clearly the best data type for the job, and that’s when you actually require an ambiguous time
***

@Mark - yeah I'm using my MSDN account to experiment with this stuff. After all that is what it's supposed to be for. I sure hope they're not throttling MSDN accounts - if they are it's a great way to ensure people won't use Azure because the performance is so terrible :-)

It's interesting to hear responses here that mostly seem to concur on the abysmal performance, but a few here and there seem to suggest that performance is just fine.

Just to clarify: when I re-installed new VMs more recently I had better luck with performance, and at least the RDP performance is 'usable'. It's better, but in load tests even these newer installs have been very slow overall compared to other providers I've tried with smaller (and much cheaper) server configurations.

I noticed similar results, especially the extra-slow response when doing anything through RDP.

I did think it was the server setup. However, I tried Azure previously using the standard 30-day trial offer, and when I did, response was good - definitely no RDP lag - and comparable to my current dedicated server hires.

I do have a theory though - like you, I have credit through Visual Studio, and this time I am using it. Surely Microsoft wouldn't throttle this back since we are effectively getting free credit... would they???

Azure was in my migration plan to get away from leasing physical servers, but I'm not so sure now - maybe I should try again with a paying (not free) account!