The above code will function in .NET Framework, but in .NET Core 2.0, an exception occurs:

‘System.__ComObject’ does not contain a definition for ‘Visible’

If you look in Task Manager, you’ll see that Excel.exe has indeed started, but the object members cannot be accessed directly without IDispatch. To work around this, you can use interop techniques that pre-date the dynamic keyword.

Interop assemblies are wrappers that enable .NET Core to interact with COM objects using early binding. Microsoft provides interop assemblies for Office automation on NuGet and elsewhere. Once the assemblies are installed, this code will work:
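For reference, the early-bound version of the Excel automation looks something like this (a sketch, assuming the Microsoft.Office.Interop.Excel interop assembly is referenced):

```csharp
using Excel = Microsoft.Office.Interop.Excel;

class Program
{
    static void Main()
    {
        // Early binding: Application is a typed class from the interop
        // assembly, so no IDispatch lookup is needed at runtime.
        var excel = new Excel.Application();
        excel.Visible = true;

        excel.Quit();
    }
}
```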

What about your own COM objects? .NET Core projects do not provide a means to reference them directly. However, you can create a .NET Framework 4.x project and add a reference to the COM object. This will create an Interop.MyCOMObject.dll assembly in the obj/Debug folder, which you can then reference directly in the .NET Core project.

A COM object may return other objects that are not part of the type library, and early binding is not an option. This happens often in my Visual FoxPro interop code. You can use reflection to access the object members.
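A minimal sketch of the reflection approach; the ProgID and member names below are hypothetical placeholders, not a real server:

```csharp
using System;
using System.Reflection;

class Program
{
    static void Main()
    {
        // "MyCOMServer.MyObject" is a placeholder ProgID.
        Type comType = Type.GetTypeFromProgID("MyCOMServer.MyObject");
        object instance = Activator.CreateInstance(comType);

        // Set a property through IDispatch.
        comType.InvokeMember("Visible", BindingFlags.SetProperty,
            null, instance, new object[] { true });

        // Invoke a method and capture its return value.
        object result = comType.InvokeMember("DoSomething",
            BindingFlags.InvokeMethod, null, instance, null);
    }
}
```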

If you’re using in-process (DLL) COM servers, be aware of 32-bit vs 64-bit issues. For web applications, take a look at Rick Strahl’s post on STA components. I don’t know if these techniques are available in .NET Core. In my experience, these issues do not apply with out-of-process (EXE) COM servers, but your mileage may vary.

Lastly, keep in mind that ASP.NET Core 2.0 continues to run on .NET Framework 4.x, in addition to .NET Core. If you’re already restricted to Windows, there aren’t many reasons to prefer .NET Core over the full framework (yet), so it remains the best option for COM Interop. That said, it’s good to know these possibilities exist with .NET Core.

Microsoft has been choosy about what they bring over to .NET Core. Over time, they have been finding their way towards more parity with .NET Framework. It was recently announced that WinForms/WPF will be coming to .NET Core 3.0. They may find that a lot of existing code relies on late binding. I would not be surprised if IDispatch makes a comeback.

Nearly a decade ago, I started this blog on the Foxite.com Community Weblog, a free service made available to the FoxPro community by Eric den Doop. I wanted more control on the admin side, especially for dealing with spam, so I have moved the blog to my own site: joelleach.net.

All posts/comments have been retained, and links to the old site should automatically redirect to the new site. Please update your RSS readers to point to the new feed.

Special thanks to Eric for making this service available to the community and for hosting my blog all these years.

TypeScript brings full class-based inheritance to JavaScript projects. This works very well, but you may run into some unexpected behavior when setting properties in a subclass and using those properties in the parent constructor.

Here is a simple example that demonstrates the problem:
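The original listing isn’t reproduced here, but reconstructed from the description below it looks roughly like this. (Newer TypeScript compilers flag direct access to abstract properties in a constructor — the very pitfall being demonstrated — so the access goes through a cast.)

```typescript
abstract class MyBaseClass {
    abstract string1: string;
    abstract string2: string;
    string3: string;

    constructor() {
        // Runs before the subclass property initializers have executed.
        const self = this as any;
        this.string3 = `${self.string1} ${self.string2}`;
    }
}

class MySubClass extends MyBaseClass {
    string1 = "Hello";
    string2 = "World!";
}

const myObject = new MySubClass();
console.log(myObject.string3); // "undefined undefined"
```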

MyBaseClass is defined as an abstract class. As in other OOP languages, this means that the class cannot be instantiated directly.

MySubClass is a concrete subclass of MyBaseClass that can be instantiated.

New in TypeScript 2.0 is the ability to define properties as abstract, which I have done with string1 and string2. These properties must be set in the subclass, or the transpiler will generate an error.

The parent class constructor sets the string3 property based on the values of string1 and string2 set in the subclass. Imagine that string3 is a property that will be used by other methods in the base class (not shown in the example code), so it is a valid design choice to set that property in the constructor.

Finally, the last two lines of code instantiate the class and display string3.

Of course, I expected this code to display “Hello World!”, but in fact it displays “undefined undefined”. Why is that? A look at the transpiled JavaScript of the subclass constructor will give us a clue.

Follow the link for more details on the reasons, but it comes down to the fact that property initialization is inextricably intertwined with the constructor. Alternative approaches have been suggested, but besides breaking existing code, the order above is likely to become part of the EcmaScript (JavaScript) standard.

As an OOP veteran of other languages, I find this behavior unfortunate. By defining a class as abstract, you are in effect saying it is “incomplete“, and it will be completed by its concrete subclasses. These technical restrictions on property initialization and constructors get in the way, but there are things we can do to work around the problem.

Constructor Parameters

Rather than setting properties in the subclass, you can pass values to the base class constructor.
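A sketch of that approach, using the same class names as the example above:

```typescript
abstract class MyBaseClass {
    string3: string;

    constructor(string1: string, string2: string) {
        // The values arrive as arguments, so property-initialization
        // timing never comes into play.
        this.string3 = `${string1} ${string2}`;
    }
}

class MySubClass extends MyBaseClass {
    constructor() {
        super("Hello", "World!");
    }
}

console.log(new MySubClass().string3); // "Hello World!"
```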

This works, but stylistically, I don’t like it for an inheritance-based approach. I’d rather have the ability to simply set properties in the subclass, but call it personal preference. There is nothing wrong with this solution.

Constructor Hook Method

Here, I’ve added an initialize() hook method to the constructor that runs before the buildString3() method. This gives the subclass an opportunity to set properties the base class needs at the appropriate time. I’ve declared the initialize() method as abstract, so that it must be implemented in the subclass.
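A sketch of the hook-method version. Note that string1 and string2 are plain properties here rather than abstract ones; the empty-string initializers stand in for whatever defaults the original used:

```typescript
abstract class MyBaseClass {
    string1 = "";
    string2 = "";
    string3 = "";

    constructor() {
        this.initialize();    // subclass sets properties here
        this.buildString3();  // base class can now use them
    }

    protected abstract initialize(): void;

    protected buildString3(): void {
        this.string3 = `${this.string1} ${this.string2}`;
    }
}

class MySubClass extends MyBaseClass {
    protected initialize(): void {
        this.string1 = "Hello";
        this.string2 = "World!";
    }
}

console.log(new MySubClass().string3); // "Hello World!"
```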

This also works, but it leaves much to be desired. Even though I have declared the initialize() method as abstract, nothing forces the string1 and string2 properties to be set. Notice that I had to remove the abstract keyword from those properties for this to transpile without error. In general, I like the idea of adding hook methods for subclasses to use, but they should be optional. The base class should not depend on them, nor should it be ambiguous about which properties need to be set.

Getters/Setters

As you may have gathered from the above, methods do not suffer from the same constructor timing issues as properties. The base class constructor called into the subclass initialize() method, and it functioned as expected. Likewise, using getter/setter syntax for properties is an option:
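A sketch of the getter-based version. Getters live on the prototype rather than on the instance, so the subclass overrides are already in place when the base constructor runs. (The base class here declares overridable default getters; the original may have declared them abstract.)

```typescript
abstract class MyBaseClass {
    string3: string;

    constructor() {
        // The subclass getters are on the prototype and therefore
        // usable here, unlike instance properties.
        this.string3 = `${this.string1} ${this.string2}`;
    }

    get string1(): string { return ""; }
    get string2(): string { return ""; }
}

class MySubClass extends MyBaseClass {
    get string1() { return "Hello"; }
    get string2() { return "World!"; }
}

console.log(new MySubClass().string3); // "Hello World!"
```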

This is closer to the original vision. Having to use getter syntax is a little wordy for my taste, when all you want to do is return a simple value. You may not mind if you are used to this from other languages.

Move the Code

Finally, my favorite solution is to move the code out of the constructor, which is where the timing issue is. I moved the code into the string3 property with getter syntax. It won’t run until the property is accessed after the object has been constructed, so the timing issue is avoided. I also added a private _string3 property for improved performance, but of course, that is optional.
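A sketch of that final approach, with the string-building code moved into a string3 getter and cached in a private backing property:

```typescript
abstract class MyBaseClass {
    abstract string1: string;
    abstract string2: string;

    // Optional backing field: starts undefined, cached on first access.
    private _string3?: string;

    get string3(): string {
        if (this._string3 === undefined) {
            // By the time string3 is first read, construction has
            // finished and the subclass initializers have run.
            this._string3 = `${this.string1} ${this.string2}`;
        }
        return this._string3;
    }
}

class MySubClass extends MyBaseClass {
    string1 = "Hello";
    string2 = "World!";
}

console.log(new MySubClass().string3); // "Hello World!"
```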

The short story is that COM interop does not function in .NET Core 1.0. .NET Core is Microsoft’s open-source, cross-platform implementation of the core libraries in the .NET Framework. With its cross-platform focus, the exclusion of COM interop is intentional.

ASP.NET Core is Microsoft’s open-source web framework that sits on top of the base .NET framework. If you require COM interop and want to take advantage of new features in ASP.NET Core 1.0, you are in luck. Scott Hanselman reminds us that ASP.NET Core runs not only on .NET Core, but also on the full .NET Framework 4.6.
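The late-bound code in question looks roughly like this sketch, using Excel automation as the example:

```csharp
using System;

class Program
{
    static void Main()
    {
        // Classic late binding: resolve the type by ProgID, then use
        // dynamic so member access goes through IDispatch at runtime.
        Type excelType = Type.GetTypeFromProgID("Excel.Application");
        dynamic excel = Activator.CreateInstance(excelType);
        excel.Visible = true;
        excel.Quit();
    }
}
```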

If you try to compile this code with .NET Core 1.0, it simply won’t work because Type.GetTypeFromProgID() is not available in the API.

That describes the current situation, but what about the future? A while back, there was actually talk about bringing pieces of COM to Mac and Linux. I think those plans have been scrapped or would only have limited use.

.NET Standard is an effort to bring a minimum common API set to all .NET platforms, present (Full Framework 4.6, .NET Core, Xamarin, Mono) and future. .NET Standard 2.0 will be implemented in .NET Core 1.1, and it brings in a lot of missing APIs from the full framework. (UPDATE: .NET Core 1.1 has been released since this was written. It includes many new APIs, but not full support for .NET Standard 2.0. That will show up in a future release of .NET Core.) It should make porting existing code to .NET Core much easier. One of the APIs slated for 2.0 (as of this writing) is Type.GetTypeFromProgID(). That means that COM interop will work on .NET Core 1.1, right? Wrong. Calling this method will throw a “Not Implemented” or “Platform Not Supported” error. As I was told by a .NET Foundation member:

There is often an incorrect assumption made that “included in .NET Standard” == “works in .NET Core”. There are going to be some APIs that will throw PlatformNotSupportedException on .NET Core, and this set may be different between Windows and Unix.

That’s a bit counter-intuitive to me. First of all, it seems like .NET Core should be the “reference implementation” of .NET Standard. Beyond that, the availability of APIs on unsupported platforms may lead a developer to believe an API is fully functional when it is not. To solve that issue, tooling is coming that will identify APIs that are not supported on certain platforms/runtimes (hopefully before a developer has gone through a porting effort). Also, keep in mind that a least-common-denominator approach has already been tried with Portable Class Libraries, and the .NET team is going for something better. The .NET Standard team is currently accepting feedback on GitHub, so feel free to post your thoughts and questions.

Looking beyond .NET Standard 2.0 and .NET Core 1.1, several COM/native interop pieces have already been segregated for a possible future extension. Also, the source code for the underlying functionality is readily available. I think it is only a matter of time before COM interop will be available for use with .NET Core.

A few of our clients were getting an obscure error when running reports. I couldn’t find any information on the error online, so I thought I would post a potential workaround here to help the next person that runs into it.

While the report is running and being prepared for preview, a dialog pops up prompting the user to find a table/DBF. When the user presses Cancel, the dialog may or may not pop up again multiple times. Eventually, the user receives this error:

File ‘m_4nl0nroh6.dbf’ does not exist.
The metadata for some report definition rows could not be loaded.
Some dynamic report features may be missing, or a report could not conclude successfully.

The error occurs in frxcursor.UnpackFrxMemberData() while attempting to ALTER TABLE on a cursor that was created only a few lines earlier. For some reason, the underlying DBF is missing, but I can’t explain why.

What I did find is that some of our reports contained generic MemberData in the STYLE field of the FRX that was not being used. Once that data is removed, the code producing the error no longer needs to run. I suspect the data inadvertently got in there in the first place via the Report Designer. If you go to Field Properties->Other and click on the Run-time extensions->Edit Settings button, it will automatically add code to the STYLE field, which will be saved if you press OK.

NOTE: There are legitimate uses of MemberData in the STYLE field, such as rotated labels. In those cases, the STYLE field needs to remain populated, so be sure before deleting the data.

Over the years, there have been debates about whether it is best to use source control integrated with the VFP Project Manager or to keep it separate. I’ve always preferred to have the integrated experience. Regardless of which side you fall on, it is very useful to have textual representations of VFP’s binary source files (SCX, VCX, etc.). These text files enable diffs, so a developer can compare different versions of a source file and see what changes were made. VFP includes SCCText.prg in the box, which has improved over time, but leaves a lot to be desired. The SCCTextX project on VFPX is a major improvement and makes the resulting text files much more usable.

However, one thing I’ve always wanted was the ability to generate text files for DBCs and DBFs that we include in our project and source control. The DBCs contain valuable information about the data structures, as well as local/remote view definitions. The DBFs are primarily metadata for things like Stonefield Database Toolkit and the framework we use, Visual ProMatrix. Before I checked in my latest changes to these files, I decided to crack open SCCTextX.prg and take a look at what could be done. Lo and behold! There is already code to deal with DBCs and the beginnings of code for DBFs, which by default had been disabled. I thought to myself, “I could have something working within a couple of hours”, so I dug in. Three days later… I finally had a solution, but with caveats.

There was a reason the code for DBCs was disabled. The text file it produced was useless for diffs. After some trial, error, and experimentation, I ended up with modified versions of SCCTextX.prg and FoxPro’s GenDBC.prg. SCCTextX_Data.prg now calls GenDBC_SCCTextX.prg to generate a text file for DBCs. It expects GenDBC_SCCTextX.prg to be in the same directory as SCCTextX_Data.prg. I made two modifications to the GenDBC program. The first was to sort the entries, so they are created in a consistent order. The second was to parse CREATE SQL VIEW commands into multiple lines, which otherwise appear in GenDBC on one line, making it very difficult to see what has changed. I’ll tell you up front that the parsing is not very good, and definitely not as good as I have seen in other VFP products/projects, but I needed something simple and lightweight, and I find it good enough for diff purposes. Also, GenDBC is a little slow compared to other text file generation, but it wasn’t a showstopper for me. NOTE: GenDBC_SCCTextX.prg is only intended for source control diff purposes, and I do not recommend it as a replacement of the standard GenDBC.prg for creating databases.

Aside: If you look in SCCTextX.prg, you may notice the developers tried to change the extension for DBC text files from “DBA” to “DCA”. I agree with this change. Unfortunately, the VFP Project Manager forces and expects the DBA extension. If some aspiring developer were to create a fully functional replacement for the Project Manager on VFPX (hint, hint), this (and other limitations with source control integration) could be overcome. But as it stands, we have no control over it.

That takes care of DBCs, how about DBFs? Well, it turns out that the code included in SCCTextX.prg for DBCs is actually pretty good for DBFs. So, easy right? Wrong. The first problem has to do with the Project Manager integration. VFP doesn’t even call SCCTextX for files in the Free Tables section. That explains why we only have the “beginnings” of DBF support. However, we can trick VFP into calling SCCTextX by putting the DBF into the Databases section of the project. There are three ways to do this:

1. Add the DBF manually in the Databases section. VFP will complain, but the file will still be there.

2. Hack the PJX (USE MyProject.PJX) and change the type from “D” to “d” on the applicable files.

3. If the project is open: _VFP.ActiveProject.Files(“MyTable.dbf”).Type = “d”

Once the DBF is in this section, VFP will call SCCTextX and otherwise integrate properly with source control. SCCTextX_Data.prg is smart enough not to run GenDBC for files that don’t have a “DBC” extension. The text file extension for both DBCs and DBFs will be “DBA”, so you can’t have a table and database of the same name, but that wasn’t a problem for me.

So far, so good, but there are other issues with DBFs. You might want to exclude certain fields, like ID fields or timestamps, that change often and clutter the diffs. Or you might want to set the order for the table to get consistent results. For this purpose, SCCTextX_Data.prg will call SCCTextX_Custom.prg if it exists in the same directory, giving you an opportunity to specify these settings. See SCCTextX_Custom – Example.prg in the download.

So now we’ve got text files for DBCs and DBFs, integrated with the Project Manager. Time for a quick build and… FAIL. Ugh! VFP doesn’t like the DBFs in the Databases section. Nothing a little project hook (included in the download) can’t fix though. It moves all non-DBCs to the Free Tables section before the build and puts them back afterwards.

With all of these caveats, I think it is obvious why I won’t be submitting my changes to the SCCTextX project manager at VFPX. That said, I’ll tell you it is VERY nice to finally be able to run diffs on these files. Definitely worth the caveats and overall effort for me. If you want to try it yourself, feel free to download SccTextX_data.zip.

On a related note, Windows 8 no longer includes “Windows XP Mode”. That’s a curious choice by Microsoft, as it could prevent businesses from upgrading. You can get similar functionality using Hyper-V, but it’s not as integrated, nor does it include the license for XP. NOTE: If you plan to upgrade a Windows 7 machine that contains XP Mode VMs to Windows 8, you may need to take action BEFORE upgrading.

On the surface, this seems reasonable. After all, Windows XP was released 11 years ago, mainstream support ended in 2009 (extended support ends in 2014), and there have been three new Windows releases: Vista, Windows 7, and now Windows 8. Going back to 2001, there was a little bit of an uproar when Microsoft ended support for Windows 95. I remember thinking it was no big deal, because anyone with smarts had long ago upgraded to Windows 98/SE, or better yet, Windows 2000. Besides, it had been SIX YEARS! Did anyone expect Microsoft to support it forever?

In spite of being supported far longer than Windows 95, the sun-setting of Windows XP seems like it will have far greater impact. Maybe it’s that I’m older. Maybe it’s the fact that 40% of computers still run Windows XP (although Windows 7 finally surpassed XP earlier this year). Maybe it’s that even though XP is 11 years old, Vista wasn’t released until 5 years later, and subsequently avoided by the majority until Windows 7 was released in 2009. If you’ve got a PC more than 3 years old, there’s a good chance it is running XP.

That 40% is reflected in our client base, if not higher. Fortunately, Visual FoxPro runs great on Windows XP. However, we are starting to use SQL Server and .NET in addition to FoxPro. I have to choose between forcing clients to upgrade or using last year’s technology (not exactly cutting edge). I know you eventually have to move on, but it feels like Microsoft is forcing the issue a little too soon. Perhaps Windows XP usage will drop off sharply within the next year, but I wouldn’t bet on it.

For a while now, I have debated whether or not to post this entry. About six months ago, I wrote most of this article, then decided not to post it. So yeah, I’m wishy-washy on this one. Since then, Microsoft has put a lot of emphasis on HTML5 and introduced Windows 8 and the Metro UI, leaving a lot of existing MS developers wondering about the future of their previous technology choices. With that in mind, I think it’s good to look at Microsoft’s treatment of VFP and how their decision processes work in regard to development tools.

Can you believe it has been over four years since Microsoft posted A Message to the Community and announced that Microsoft would cease development on Visual FoxPro? It has been over three years since Microsoft released VFP9 SP2 and Sedna, and over six years since VFP 9.0 was released! While I was saddened that Microsoft chose to cancel VFP, I appreciated the sensitivity the Fox Team, particularly YAG, showed to the Fox community. However, at the time, I wished Microsoft would have been more transparent and given a more thorough explanation of their reasons for making that decision. YAG may have been constrained in what he could/should say in his position, and I imagine there was some disagreement with the decision within the Fox Team, but those are just guesses. Regardless, we didn’t get an official statement from Microsoft, other than it was happening, and the Fox community was left to piece together the reasons. That led to comments like “writing on the wall”, “head in the sand” and conjecture on lack of sales vs. lack of marketing, etc. Ultimately, it comes down to the fact that VFP was not a “strategic product” for Microsoft, but why was that and what does it mean? Answers lead to more questions, but I think that is worth exploring.

Note that this blog entry is just more conjecture/opinion. I don’t have any more facts than you do. I am just putting all the pieces I have on the table and building a picture. Over the past couple of years, I have debated whether or not to write this, because it is a negative subject and obvious flame-bait. But I think enough time has passed now and it could be a good thing. To move forward, you have to let go of the past, and this has helped me do that. I still use VFP as my primary tool, but this has helped me “get over” Microsoft’s decision. It may also be helpful in deciding where you want to go in the future. Now, I am in no way defending Microsoft or saying I agree with their decision. I am just trying to understand why they would make a business decision to discontinue FoxPro. Statements here may be “obvious” or “old news”, but I think it is helpful to pull it all together.

“Not a Strategic Product”

Enough disclaimers, let’s get to the subject at hand. Microsoft has stated for some time that FoxPro was not a strategic product for them. What does that mean? To my mind, a strategic product is one that Microsoft would invest in heavily and recommend as the primary path for their customers. MS would build upon the technology and form an entire “strategy” around it. VB was strategic. COM was strategic. .NET is strategic. Fox was not. Why not? To answer that, I think you have to look at why Microsoft bought Fox Software in the first place.

Microsoft was working on its Access DBMS which uses a modern variant of the BASIC language. It had to have been embarrassing for Microsoft to have such a glaring hole in its product lineup. They had no DBMS, and their partnership with Ashton-Tate failed to get Microsoft SQL Server off the ground. Some of the marketing types at MS realized that FoxPro was the best version of X-Base out there, and had been trying to talk Bill Gates into doing something about it. They knew that the X-Base language commanded a huge segment of the market and that a product which used the X-Base language would get them into the DBMS market in a big way. They had the marketing resources to put behind FoxPro, and Fox had some interesting and useful technology — not to mention some very talented people, the kind Microsoft likes.

The purchase of Fox Software for $173 million in 1992 was very strategic for Microsoft, and was the biggest corporate purchase Microsoft had ever made up until that time. Borland had purchased Ashton-Tate, which included dBase III and IV, and had Paradox. PowerBuilder was growing in popularity at the time as the king of client/server tools (ironically, Sybase released PowerBuilder 12 last year based on the free Visual Studio Shell runtime). Microsoft needed three things from the Fox Software deal: the Fox developer team, the Fox technology, and the customer market share of FoxPro/FoxBase. Microsoft was just starting work on Access, which targeted power users more, but there was still some overlap. Visual Basic was still in its early days.

Basically, Microsoft wanted a stronger presence in the database market, and they were in severe need of database products, people, and technology. Fox Software was a perfect fit for them. To give you a little more context, in 1992 the xBase market was still booming, Access (Cirrus) was still in development, Visual Basic 1.0 had been released and VB 2.0 was in development, and the first release of SQL Server for Windows was not until 1993.

I think if you asked anyone “in the know” at Microsoft, they would tell you that the Fox acquisition was a resounding success (unlike other much more expensive acquisitions that Microsoft has recently dumped). They got a solid product, key technology that made its way into several other products, and valuable people that went on to take major roles in the company. Why then did FoxPro not share that level of success? I do not believe Microsoft had malicious plans to kill FoxPro from the beginning, but the landscape had changed, as it tends to do in technology. The xBase market declined.

xBase Market Decline

FoxPro was first and foremost a competitor in the xBase market. As that market declined, so did the value of FoxPro as a strategic product to Microsoft. What led to the decline of a technology that had been so popular in the 80’s and early 90’s? Technology trends are constantly changing, but here are a few key things that in my opinion diminished the xBase market:

dBASE IV: dBASE IV was a buggy disaster, and it was two years before they released version 1.1. Borland bought Ashton-Tate, but could not undo the damage. dBASE for Windows was not released until 1994. This was good for FoxPro, which became the biggest fish in the xBase pond, but the pond itself began to shrink.

Lawsuit: Ashton-Tate sued Fox Software for cloning dBASE. The suit was dropped when Borland bought Ashton-Tate, but it could not have inspired confidence in the xBase market.

Client-Server: By the early 90’s, client-server technology picked up in popularity and developers were beginning to flock towards database servers and client-server development tools like PowerBuilder and VB. At the same time, Microsoft was trying to enter the server market with Windows NT and SQL Server, so I’m sure there was strong emphasis on this style of development from them. I believe there was talk of a “FoxServer” product at Fox Software, but it never saw the light of day before the Microsoft acquisition.

Those are reasons that the xBase market declined, but about now you’re thinking that VFP is so much more than an xBase tool. I couldn’t agree more. VFP can go toe-to-toe with VB, PowerBuilder, Delphi, .NET, and others. If FoxPro was supposed to “go quietly into the night”, someone forgot to tell the VFP 3.0 team, because they transformed the Fox into a full-fledged OOP development platform ready for the 32-bit world and beyond. So, why wasn’t the emphasis there from Microsoft?

An important point to make about Microsoft is that they are a follower of development trends, not a leader. With a few exceptions (the VB GUI designer comes to mind), Microsoft has not been the one to create a development trend. “Embrace and extend” was their motto, and they have done well with that. Windows was Microsoft’s answer to the Mac. .NET is Microsoft chasing Java into the enterprise. They follow current trends and they do so mercilessly. Even now, Microsoft is emphasizing HTML5, leaving Silverlight developers thinking “Wait, I thought we were on the cutting edge?” It would be out of character for Microsoft to promote and strategize around a product built for a market that was trending downwards. It’s nothing personal against the Fox, it’s just not in their DNA.

FoxPro Market Decline

Even with the xBase decline, if FoxPro revenue had continued upward, I wouldn’t be writing this article. Sales declined, and there are several reasons for that:

Power Users: Going all the way back to dBASE, you could question whether it was a platform for power users with development capabilities or a platform for developers that power users could use. It was both. Visual FoxPro put it squarely in the developer category, and Access took over as the preferred database for power users. The result: much fewer licenses sold.

VB, SQL Server, .NET: VFP faced a lot of competition from other products within Microsoft. With the emphasis always on the latest trends, many developers felt compelled to move to other technologies.

Visual FoxPro: That’s right, VFP itself. While VFP 3.0 was a massive improvement in development capabilities (and most of us are happy with that decision), it was also a big leap from FoxPro 2.x in terms of learning curve. It took some developers quite a while to make the jump, and some never did.

Not Invented Here Syndrome: Microsoft took a great product and made it even better, which makes their treatment of FoxPro all the more frustrating. But Fox was still the stepchild, and it was never going to supersede other products developed internally. By the time Microsoft purchased Fox, they had already made significant investments in VB, Access, and SQL Server. Those would be Microsoft’s strategic products, while Fox would continue serving the declining xBase market and otherwise fit between the lines.

Why 2007?

People had been foretelling the death of FoxPro since Microsoft bought it in 1992. What made 2007 the year when Microsoft finally decided to cancel it? Had sales declined to the point that Microsoft could no longer justify Fox development? Did they want to use the Fox Team in other parts of Microsoft? Did big customers move to something else? Were the people that cared gone or no longer in a position to do anything about it? Your guess is as good as mine. We will never know.

There are a couple of ways to look at this: 1) Microsoft always wanted to cancel Fox and they finally got their way, or 2) in spite of Fox not being a strategic product, Microsoft continued to create new versions for Fox developers. I tend to think it was the latter. While there was always a question of Microsoft’s commitment to FoxPro, by the release of VFP 5 it had become clear that it would not be a strategic product. Per Ken Levy’s blog:

In the initial years after the Fox Software merger, Microsoft put a huge effort and lots of resources into creating VFP 3.0. There were about 50 people on the Fox team with a big marketing budget. In the following years, both Access and VB grew in market share and also competed in ways with the VFP market (and messaging), and by the time VFP 5.0 was released, many upper managers wanted Microsoft to just end VFP there. In fact, they did for a short time. I was there, in a meeting with 40 people, and the formal announcement was made to the Fox team that VFP was dead. It was very early 1996, and that meeting lead to the Gartner Group releasing their report that VFP was dead, which had a major impact on future VFP sales.

Most of Microsoft’s competitors would have ended it right there, and VFP 5 would have been the last version. So, the real question isn’t “Why 2007?”, it’s “Why not 1996?”. Ken Levy continues:

But the Fox team members along with the community helped convince the developer tools management to keep VFP evolving while decreasing the resources. In fact, the primary reason VFP lasted another decade with 4 more versions released was more about Windows sales than VFP sales. There are many Windows machines running VFP apps. When Steve Ballmer jumps around like monkey boy and yells “developers, developers, developers”, he’s thinking about selling Windows and Office more than sales of developer tools.

If VFP 5 had been the last version, then I may have never had the joy of working with Visual FoxPro, because I really didn’t make the jump from FoxPro 2.x until version 6.0.In fact, I’m not sure where I’d be today, as I took my current job back in 2000 to upgrade a Fox 2.x app to VFP.So, I’m definitely thankful Microsoft saw fit to continue development.

That said, Microsoft’s handling of VFP support since the announcement has been appalling.VFP 9 SP2 introduced several bugs.After months of begging, we were able to get them to fix one key bug, but others remain that will never be fixed and must be worked around.Microsoft claims that VFP is supported until 2015, but I’m sorry, that’s not support.To be clear, I’m not blaming the Fox Team for this.I’m blaming Microsoft for the fact that there was no Fox Team and management was unwilling to provide resources to fix these problems.Real support ended when the Fox Team was disbanded and assigned to other projects.

So, what now? That’s the big question Fox developers are asking themselves or have already answered. I don’t know about you, but I continue to be extremely busy with Visual FoxPro as my primary development tool. I also keep tabs on new technologies as they are introduced, with an eye toward how they could benefit me. Maybe that will be the subject of a future post.

Like many developers, I spent much of last week and the weekend watching the Microsoft BUILD conference and learning about Windows 8, the new Metro UI, and the Windows Runtime (WinRT). My first impression of the Metro interface was that it is compelling, and I like it. That has faded a bit as I get into the details of what you can and cannot do with the new architecture. There is a lot to like about the Metro user and developer experiences, all of which you can learn about from Microsoft and the blogosphere. As a developer, I’m thinking along the lines of how I can use the new technology while preserving my investment in existing code and skills. Restrictions built into Metro (by design) have a direct impact on that. While highlighting all the cool new stuff, Microsoft has glossed over the restrictions a bit, so this post looks at things from that point of view. First of all, I should say that this is a developer preview: Microsoft is looking for feedback from developers on their design decisions, and some of those decisions may change before release.

The more I learn about Metro/WinRT, the more I realize it is “Windows” in name only. To start, there are no windows in Metro! Everything is full screen. While there are some things shared under the hood (like the Windows Kernel and file system), for the most part, Metro is separate and isolated from the Windows “desktop”. Metro competes directly with Apple iOS and Google Android in the tablet space, while also being available to PCs. That much is obvious, but being bundled in Windows, you might think you also get access to all the goodness in the Win32 and .NET APIs. That is not the case. This really is two operating systems in one box.

So, why not just sell Metro separately as a new OS? Microsoft already tried that. It’s called Windows Phone 7, and nobody’s buying. Even Windows Vista had 400 million users, so bundling Metro with Windows ensures there is a viable market for apps. Besides, we developers would freak out if Microsoft dropped Windows for something entirely new. Putting Metro in Windows is good all around.

A key point to make is that Metro is client-side technology only. In spite of support for HTML, these are desktop apps. The HTML is not served up by a server, but rather is compiled into the Metro app. Being client-side technology, there are a lot of things Metro doesn’t do. Microsoft expects your business logic and database access to be done on the server and exposed as a service that Metro apps can consume. Of course, they would be happy to host this service for you in Azure, but that is not a requirement.

How about the Windows Runtime? What exactly is it? It could have just as easily been called the Metro Runtime, because it exists for the sole purpose of servicing Metro apps. Under the covers, WinRT components are written in native C++ (take that Dev Div!), and interfaces are exposed using COM and metadata. This does not mean that COM is making a comeback in Metro. Microsoft has used COM (specifically the IUnknown interface) for years as a means of exposing native interfaces, and they are simply using that rather than reinventing the wheel. There are no type libraries, rather interfaces are described in a modified version of the .NET metadata format called winmd.

So, if you’re thinking about accessing the WinRT from FoxPro or any other environment using COM, you can forget about that. In fact, the whole notion of automating other applications is absent from Metro. You cannot directly start another application and communicate with it, and this applies to both Metro and desktop apps. You can send info to another Metro app via “contracts”, such as the Share contract. But you can only use contracts provided by Microsoft, and these contracts can only be initiated by the user, not in code. If you do need to communicate with another app, even on the same machine, then you’ll need to use network protocols, such as IP sockets, HTTP, or web services.

With all this hype, what is Metro replacing? Something has to die, right? I’m sure each of us will draw our own conclusions on that, but I will say that Microsoft always emphasizes the shiny new thing. That doesn’t mean existing MS technologies will go away, or that MS will stop investing in those technologies. At the same time, Microsoft is clearly investing heavily in Metro and will continue to do so, at least until the next shiny new thing comes along. From a technical point of view, to the extent that you can port your code (based on restrictions) and that you choose to adopt the Metro design, Metro replaces almost everything. If every desktop app were rewritten in Metro, then it could even replace Windows as a whole. It’s hard to know what Microsoft’s aspirations are in this regard, but based on the targeted nature of Metro and its capabilities, I don’t see this happening, not for a long, long time, if ever. From the point of view of each technology…

.NET: I can unequivocally say that the .NET Framework is here to stay. Since Metro is only client-side technology, you will need something on the server side, and Microsoft will continue to push .NET for that. When building C#/VB apps, Metro does use the .NET Framework, however it restricts you to a subset of functionality. For example, you cannot use the System.Data namespace, which means no ADO.NET, which means you can’t access a database directly from Metro. You’ll need to build the data access into the server side using .NET. For what it’s worth, I believe Silverlight has the same restriction. Microsoft seems to be deciding what the .NET restrictions are based in part on what they did in Silverlight (more on that topic at http://channel9.msdn.com/events/BUILD/BUILD2011/TOOL-930C). Lastly, do I even need to mention ASP.NET? Browser-based web apps aren’t going anywhere.

Silverlight/WPF: As far as being deprecated, I worry about Silverlight the most. MS will be pushing Metro on the desktop and HTML5 through the browser. Some will say that WPF is already dead. On the other hand, Photoshop was shown at the keynote as an example of an app that would NOT be appropriate for Metro, due to its dense UI design with a focus on productivity. Many line-of-business applications could be described the same way, so there is still a place for these technologies. As it stands, there are new versions coming for both, but beyond the next version, we don’t know. Now that the XAML group is part of Windows, Metro will no doubt get the lion’s share of the resources, but I hope investment continues in Silverlight and WPF.

Win32/C++: For the most part, C++ developers will be using WinRT in Metro rather than the Win32 API. MS does allow access to a subset of Win32 APIs that have functionality not addressed in WinRT (http://msdn.microsoft.com/en-us/library/windows/apps/br205757%28v=VS.85%29.aspx), but at this level, WinRT and Win32 really are separate worlds. That said, Win32 is the most used API on the planet. Even if MS doesn’t invest another cent into it, it will be here a long time.

Let’s talk about preserving your existing investments. Microsoft is focused on making sure your existing skills can be used when building Metro apps. That is, as long as those skills include HTML/JS, C#/VB, or C++. You might even be able to port existing code into Metro, but it’s not copy and paste, and it has to adhere to the restrictions. Microsoft demoed porting existing XAML and code from Silverlight and Windows Phone apps that were already written in the MVVM pattern used by Metro. If you want to move from a Windows Forms UI, I doubt there is much you can keep, and you’ll be rewriting the UI in Metro. From FoxPro, you don’t get to keep anything, except what you use on the server side. I’m not a C++ developer, but I imagine porting C++ code would involve replacing Win32 calls with WinRT functionality. I don’t know how feasible that is; perhaps you can bring in code that doesn’t reference Win32 (good luck finding some) and rewrite the rest to use WinRT.

As far as compiled libraries, you can’t just add a reference to a .NET DLL in a Metro app. You can, however, create a “Portable Library” that can be called from both Metro and standard .NET, as long as the library sticks to the Metro subset of .NET. I don’t know how this works in C++, but it may be the same story. MS showed bringing the Boost library into a Metro app, but I don’t know if that was Boost source code or a compiled library.

Moving on to user interface… Microsoft is providing developers with clear design guidelines for Metro apps. Designers may not like being told how to design their apps, but I don’t think it is a requirement that you follow the Microsoft rules. Personally, I am glad that Microsoft is providing guidance on Metro design, because we got no such guidance with WPF or Silverlight. How many times have you seen a WPF or Silverlight session where the speaker says “I’m not a designer…”, and then proceeds to show a crappy-looking UI that makes you wonder why you would want to use the technology at all? With Metro, I can use the MS templates, follow the guidelines, and end up with an attractive, functional app. If I want to do something extra special, then I can get a professional designer involved.

What about deployment of Metro apps? You will be able to use “side-loading” to test and debug your apps on a limited number of devices. Otherwise, you can’t directly install your Metro app on a device. It must be deployed through the online Windows Store. Each app you upload will be certified against a series of tests to ensure compatibility and compliance. So, if you’re thinking about using some tricks to get around the restrictions I’ve already mentioned, that may work on your machine, but the certification tests will fail and you won’t be able to deploy your app. Microsoft hasn’t released details yet, but most expect they will do the same as the Windows Phone store and take 30% of the price of Metro apps. You may have heard that the store will be free, but I believe that is only for the listing of Windows desktop (non-Metro) apps, in which case, MS just links to your site, but doesn’t handle deployment. There will also be an enterprise deployment option for companies that want to deploy apps internally. Apple charges $299/year so that you can distribute code YOU write to devices YOU purchased. That’s not a lot of money, but it just rubs me the wrong way, so we’ll see what Microsoft does in this regard.

So, Microsoft has Metro locked down pretty tight. Metro limits you to a subset of .NET. You can’t automate other applications or reference existing “desktop” DLLs. MS forces you to deploy through their store and takes 30% off the top. This is not characteristic of the Microsoft we have worked with all these years, but they are following Apple’s lead. At the same time, by exerting this level of control, Microsoft can ensure the integrity, stability, and security of the Metro experience. The restrictions prevent outside code from polluting and compromising the Metro environment. If that all seems a little too Big Brother to you, you still have the option of creating Windows desktop and HTML applications without the restrictions, both of which can still be touch enabled.

With all of this in mind, should you run out and rewrite all of your applications for Metro? Every day, I work on a FoxPro line-of-business app with a heavy emphasis on data entry and productivity. The fact that we’re still using FoxPro should tell you that we don’t jump at every new technology that comes out of Microsoft. But, even if this were a WPF or Silverlight app, I don’t think it would be a good candidate for the Metro style. Maybe that’s a lack of imagination on my part, but displaying info one screen at a time or with a bunch of panning or scrolling would cause a large hit to productivity. Now, are there pieces of our app that I could expose to a wider audience using a simpler interface with touch capabilities? Yes. And the developer and user experience in Metro is compelling enough that I might just target it over an HTML interface that works on multiple devices. I’m looking at Metro as an environment where I can offer new apps and experiences to users when it makes sense, not as a replacement for existing technologies.

Years ago, Intel added a HyperThreading feature to their CPUs before dual-core processors were available. More recently, Intel reintroduced the technology into their “Core i” series of processors. What is HyperThreading and how does it affect ParallelFox? Let’s start with the Wikipedia description:

Hyper-threading works by duplicating certain sections of the processor—those that store the architectural state—but not duplicating the main execution resources. This allows a hyper-threading processor to appear as two “logical” processors to the host operating system, allowing the operating system to schedule two threads or processes simultaneously. When execution resources would not be used by the current task in a processor without hyper-threading, and especially when the processor is stalled, a hyper-threading equipped processor can use those execution resources to execute another scheduled task. (The processor may stall due to a cache miss, branch misprediction, or data dependency.)

What this boils down to is that a single thread/process will not utilize all of the execution “slots” or “units” in a CPU core. This is especially true when the processor is “stalled”, meaning that the processor is waiting on something before it can continue. This may be due to the inherent design of the CPU, or because it is waiting for data to be accessed from main memory. HyperThreading allows a second thread/process to utilize the unused execution slots. Generally, this is a good thing and can provide a 15-30% performance boost to parallel processing.

However, in cases where there is heavy competition among threads for the same execution slots and other resources, HyperThreading can be slower than running a single thread on each core. The examples that ship with ParallelFox exploit this weakness. On a single-core HyperThreading CPU, the “after” examples are actually slower than the “before” examples. Of course, this was not intentional. The reason is that the examples simulate work rather than resemble real-world code. Here is the SimulateWork() function:

Procedure SimulateWork
    Local i
    For i = 1 to 1000000
        * Peg CPU
    EndFor
EndProc

While this code does a good job of pegging a CPU core at 100%, it also causes the same few instructions to be executed millions of times. With HyperThreading enabled, competition between the two threads for the same CPU resources is extreme. In a real-world scenario, there would likely not be this much competition for resources and HyperThreading would be beneficial.

As with most things, your mileage may vary. If you find that your code runs slower with HyperThreading, you can tell ParallelFox to use only half of the “logical” processors, starting only one worker per physical core.
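
A sketch of that setup follows. Aside from DetectHyperThreading(), which ParallelFox does provide, the CPUCount property and the worker-count argument to StartWorkers() are assumptions on my part, so check the ParallelFox documentation for the exact member names:

Local lnWorkers
lnWorkers = Parallel.CPUCount  && hypothetical: number of logical processors
If Parallel.DetectHyperThreading()
    * One worker per physical core (half the logical processors)
    lnWorkers = lnWorkers / 2
EndIf
Parallel.StartWorkers("worker.prg", , , lnWorkers)  && hypothetical signature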

ParallelFox uses WMI (Windows Management Instrumentation) to detect whether HyperThreading is enabled. WMI has shipped with Windows since Windows 2000. However, WMI can only detect HyperThreading on Windows XP SP3, Windows Server 2003, and later versions, because that is when Microsoft introduced the required APIs. On previous versions of Windows, Parallel.DetectHyperThreading() will always return .f., even if HyperThreading is enabled.
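
ParallelFox’s own detection code isn’t reproduced here, but a minimal sketch of the general WMI approach looks like this, comparing the Win32_Processor properties NumberOfCores and NumberOfLogicalProcessors (the properties that first appeared in XP SP3 / Server 2003):

Local loWMI, loProcs, loProc, llHT
llHT = .f.
loWMI = GetObject("winmgmts:\\.\root\cimv2")
loProcs = loWMI.ExecQuery("SELECT * FROM Win32_Processor")
For Each loProc In loProcs
    * More logical processors than cores implies HyperThreading is enabled
    If loProc.NumberOfLogicalProcessors > loProc.NumberOfCores
        llHT = .t.
    EndIf
EndFor

If HyperThreading is off (or the OS is too old to report these properties), the two counts match and llHT stays .f.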

There are several other features I want to add to ParallelFox, but at this point, I think it is feature complete for version 1.0. Also, very few issues have been reported from previous versions, so I am moving this release up to release candidate status.