Don Syme Answering Questions on F#, C#, Haskell and Scala

Bio

Don Syme has made major contributions to the design of F#. He also participated in the design of C# generics and the .NET CLR. He joined Microsoft Research in 1998 and received his Ph.D. from the University of Cambridge Computer Laboratory in 1999.

About the conference

QCon is a conference that is organized by the community, for the community. The result is a high-quality conference experience where a tremendous amount of attention and investment has gone into having the best content on the most important topics, presented by the leaders in our community. QCon is designed with the technical depth and enterprise focus of interest to technical team leads, architects, and project managers.

I'm a senior researcher at Microsoft Research in Cambridge, and my passion over the last 10 years has been to bridge the gap between what you might think of as more academic, research-oriented languages - especially from the functional programming tradition, but also more broadly across other areas of computer science, in terms of verification and reasoning about programs - and to take some of those ideas and actually push them into practice in industry.

Things go both ways: we are inspired by problems in industry, we do research in programming languages where there are results, and we try to take those ideas and make them work in practice. I've worked on two big projects: one was .NET Generics, in C# and Visual Studio 2005 - it seems like quite a long time ago now - and then, more recently, on F# as a programming language. We laid the foundation with Generics and we are moving along with F#.

That's a small part of the sequence. The original design of the .NET platform was very much expected to be a multilanguage platform from the start. Right back in 1998, just as our research group in programming languages started at Microsoft - I joined the team, and then about 10 of us joined - we were approached by a guy called James Plamondon, who started a project called Project 7. It was about getting 7 academic and 7 industrial programming languages, on each side, to target the .NET common language runtime and really check whether it was good enough - to see if design changes could be made early in the design process of .NET to make sure it worked for a range of programming languages.

Some of those design changes were made - tail calls, for example, were added in the first version of .NET. It was a very interesting project because it gave our group and researchers at Microsoft a way to make connections between the academic programming world and .NET. We have seen a lot of those people working on .NET over the years, and it also let our group work directly on .NET with regard to .NET Generics and other proposed extensions - we got these researchers engaged with the system.

We sort of knew there were opportunities not only to contribute to existing languages like C#, but also to do new languages in this kind of context. We talked a lot about doing a systems programming language of some kind, something that would effectively sit between C and C#, where you'd be able to control memory usage and maybe get safety properties at the systems programming level.

We ended up not pursuing that, although people are still interested in doing "managed" systems languages - languages you could write a whole operating system in, or large portions of Windows in. Then, at the other end, we were interested in doing these expressive functional languages. Haskell .NET was something we looked at closely, and we always had in mind to do an ML as well. There is a project called SML.NET, which is a very good implementation of Standard ML for .NET with a highly optimizing compiler and the like. We learned a lot from that - a great project!

I took what I learned from .NET Generics and saw that there was a chance to do an ML-like language that fitted very closely with .NET. During this time we had a go at doing Haskell for .NET - we actually got a long way with that, but in the end there is quite a lot of dissonance between Haskell and .NET. Purity is one aspect of that: you are writing monadic code whenever you use the .NET libraries, which would lead you to writing more monadic code than you would like. Also, Haskell didn't have any tradition of adding object-oriented extensions.

There was one project called O'Haskell, which was interesting, but there was something about Caml, which had a tradition of taking a core language and then making extensions to it, changing it, experimenting with it - taking a core paradigm that works very well and then making it fit a context. In a sense, in Haskell there are too many points where the model wasn't going to work in the context of .NET, and you get too much dissonance across the interoperability boundary in particular - actually, there are some serious challenges in making it run well at all. That was definitely an interesting project, and I think a few other people have tried Haskell on .NET, but we didn't continue with it.

It depends what you mean by "seamless", and sure, that would be a word I'd use for this kind of interoperability. Certainly, when we use C# components from F#, F# components from Visual Basic, or Visual Basic components from C#, it's amazingly easy to take components written in another language and use them inside a .NET application. Actually, I would say it's fairly seamless - the syntax is different between the languages, but at the .NET interoperability boundary F# can consume just about any C# API in a natural way.

We put a lot of work in at that level to make sure that using .NET APIs is as seamless as it can be. The second half of your question is about semantics - at a fundamental level, the semantic model of F# is similar to the semantic model of C#. In fact, the problem with the semantic models isn't that they are so different; it's more that, in functional programming, you want less - you take things away. By taking things away you get this "less is more" effect: you get programs which have less, in the sense that they are more pure, and by getting that purity you get some properties out of your program.

You get to refactor your programs, or you get to do equational reasoning between your programs, so you get to transform them in certain ways, or you get these lovely properties where you can abandon computations knowing that they haven't had any side effects, and the like.

The fundamental model we execute is imperative code, and you can do imperative programming in F#, but the approach we take is not to say we are a semantically, fundamentally different language. Rather, the language is constructed in such a way as to orient you towards doing functional programming and using less state - it's a transition point, in a way, towards more pure programming.

The initial version of F# was extremely lean as far as these sorts of things go. In fact, in the very first version of F# you actually had to write inline IL assembly code to make any kind of call to a .NET library - we had no interoperability in the very first release. Since then, it's been a long process. We didn't just decide to implement C# as a subset of F# - that would be one way you could do it. You could say, "OK, there is a sort of subset which you can use", or maybe even have a special syntax for embedding C# in your code and compile that with the C# compiler, or something like that. We didn't go down that road.

It was a slow process of assessing each construct in C# and either deciding that it had to be part of the core F# programming model - doing design work in F# to make it fit as well as we could with the functional approach we were taking - or not. Inevitably, you end up implementing the entire object model, the API model of the platform you are associated with. The reason people use these high-level virtual machines is so that they can write libraries at a high level - they are basically library machines.

The purpose of the object model is to allow people to write certain kinds of libraries, and in order to use those libraries you end up having to implement the full object model. Certainly, once you implement the full object model, you have to be able to declare the full object model in the language - that's just inevitable. There are some constructs we added a little bit reluctantly, and some which we gave a very different and productive interpretation in F# - for example events, in C#. We didn't just take the notion of an event from C#; we instead created this notion of "first-class events" in F#.

We can actually apply functional programming when you get an event stream, so you can think of it as functional reactive programming: stream events coming in that you want to filter, map, transform, fold across or scan - collecting some state - and then emit new events. You can do that in this very nice, compositional functional programming style, based on these events coming in as first-class objects: you can take a form, get the click event, map it, filter it and the like, and then emit a composite event at the end.
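A minimal sketch of this compositional style, assuming a hypothetical integer-carrying event source (`Event.filter` and `Event.map` are the standard F# combinators over first-class events):

```fsharp
// Hypothetical event source carrying an int payload.
let clicks = Event<int>()

// Compose a derived, first-class event: keep even payloads, double them.
let processed =
    clicks.Publish
    |> Event.filter (fun x -> x % 2 = 0)
    |> Event.map (fun x -> x * 2)

// Subscribe and collect the transformed payloads.
let mutable results = []
processed.Add(fun x -> results <- results @ [x])

[1; 2; 3; 4] |> List.iter (fun x -> clicks.Trigger x)
// results is now [4; 8]: only the even inputs 2 and 4 survive, doubled
```

The composite `processed` is itself an event value, so it can be passed around, further filtered, or subscribed to from anywhere, which is the "first-class" part.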

In some sense, C# had a very deep insight when it imported this: events are like the imperative version of functional reactive programming, and it's interesting to look at the parallels between the two - events and behaviors. They put into the language a construct which was unusual - it was new, but very fundamental in making C# a good reactive programming language, in the sense of reacting and responding to streams of events. We took that C# insight and gave it a functional language interpretation.

It's an obvious decision to make in the context of .NET. Laziness brings a lot of challenges in a programming language. If you make truly everything lazy, then you have all sorts of efficiency problems, all sorts of debugging problems, all sorts of interoperability problems. Traditionally, people have often designed their own execution infrastructure, so even getting code to compile to a different execution infrastructure like the .NET common language runtime can be quite challenging.

The question really is what you would gain from laziness, and I think that's well understood in the context of Haskell. Some programs do end up nicer when everything is lazy, but you pay a cost for that as well, which is that suddenly everything is a computation. You have a function getting an argument that says it's an integer, but when you poke at that integer it might never return - so there is a cost: everything is implicitly a computation. It's an interesting design choice and a very powerful one in the context of Haskell.

The thing you really gain in Haskell is that you can create networks of recursive objects, which is a problem that has interested me as a researcher in the context of strict languages. F# does have mechanisms for this: you can define strict recursive objects, where the compiler creates the network of objects from the initialization graph and executes the initialization lazily, as a graph. You can't be sure that this process won't evaluate things before they are ready.
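The idea can be sketched with two mutually recursive objects, each holding a reference to the other, wired up through a recursive value definition (the `Ping`/`Pong` types here are hypothetical illustrations; the F# compiler warns on such definitions that it cannot prove values aren't accessed before they are ready):

```fsharp
type Ping(pong: Lazy<Pong>) =
    member _.Partner = pong.Value
and Pong(ping: Lazy<Ping>) =
    member _.Partner = ping.Value

// A recursive value definition: each object's initializer mentions the
// other; the delayed (lazy) references let initialization proceed as a graph.
let rec ping = Ping(lazy pong)
and pong = Pong(lazy ping)

// The cycle is closed: following Partner twice gets back to the start.
let roundTrip = obj.ReferenceEquals(ping.Partner.Partner, ping)
```

Forcing `pong.Value` before `pong` has been initialized would fail at runtime, which is exactly the "evaluated before ready" risk described above.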

To answer your question: it would be unusual to make a language on .NET lazy; it would lead you into very many challenges. I think that's an open question. In the last 20 years the world has become very platform-centric, framework-centric. We have Java and we have .NET, and it's very hard to imagine a framework which only contains a lazy language, with no strict programming around at all, and no accommodation for some of your application to be written in a strict style and some of it in a lazy style.

That's an enormous challenge for those interested in lazy programming because it's one thing to have a niche language like Haskell - and it's beautiful and it's powerful - but it's another thing to say it scales up to the kind of programming that is done on these framework oriented platforms like Java and .NET.

We were having a chat before we started about design elements - what process we use for deciding what would and wouldn't go into F#. Type classes play a very interesting role in Haskell: they began life as a way of doing overloading, a way of making sure plus could work over the different types you want it to work with. Then people found they are actually a very useful way of propagating information through the program.

There are a lot of things you can do with type classes which are very interesting, but it's also an area where it's not clear where the stable points in the design space are. You get these discussions about functional dependencies in Haskell, or overlapping instances, or these diamond properties where two different modules declare incompatible instances of the same type class and they conflict when both are imported. There is quite a surprising number of technical problems associated with type classes.

We have here a clearly powerful mechanism, but there is this slippery slope from what it's originally useful for - overloading, which is an important problem - to very advanced functional programming, where it's still not clear whether it will ultimately lead to higher productivity in the end or not, with lots of technical detail to look after. In the approach we've taken in F#, we actually use a very weak version of the type class mechanism, just for overloading.

When you have a plus in F#, you use constrained type inference - which comes from type classes - to propagate these constraints around and eventually generate a witness: "OK, we've got an integer on the left and an integer on the right - we must be doing integer addition." In that sense, we use a mechanism very similar to type classes, a version of constrained type inference, for the original problem type classes were looking after: overloading.
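In today's F#, this constrained inference surfaces through `inline` functions with statically resolved type parameters; a small sketch:

```fsharp
// 'inline' lets F# infer a member constraint: the argument's type must
// support (+), and the right addition operation is resolved at compile time.
let inline double x = x + x

let di = double 21     // integer addition is chosen as the witness
let df = double 1.5    // floating-point addition is chosen here
```

The constraint travels with `double`, so each call site gets a concrete, statically resolved addition rather than a runtime dispatch.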

It's been very much on my mind that one potential evolution step for the F# language is to do something like this - some theme of type classes that would let you write these generic components, which are generic over everything: arithmetic types and other kinds of sets of parameters, monads and so on. Those components have real uses. There are other ways of writing those components in F# today: you just pass the witnesses, the dictionaries, explicitly, and that works fairly well. In fact, there is a reason to make it explicit, because we've got interoperability with C#: if C# wants to use one of these generic components that we write, it can actually instantiate it - it has to do so explicitly, of course. Type classes are a fantastic mechanism; it's a matter of choosing points along the design axis which are appropriate in the context of .NET and in the context of what F# is trying to achieve.

There is another problem that happens with type classes, which a lot of languages have: you end up encoding quite a lot of the logic of your application in the type class system. It's as if the type checker is effectively running a Prolog interpreter halfway through your type checking, and half your program ends up running at type checking time - which is great, I mean, that's interesting - but in F# we try to keep things simple and clear, a simpler foundation anyway, so we don't lean on that mechanism quite so much as is done in the Haskell world.

For me, F# is a fantastically productive language for doing .NET programming, and I characterize it as data-oriented programming and sometimes control-oriented programming. You might be analyzing large log files on the data-oriented side, or doing some parallel programming on the control side. I always like to think that some of the power comes from the language and quite a lot of the power comes from the platform. F# gets a lot of its power from the richness of the .NET platform, but .NET actually gets a lot of simplification and power from the addition of a functional language to the platform.

We see the two as very synergistic: F# is done in the same team as C# - in fact, some of the same people - and the designer of C#, Anders, has just recently given a talk on the future of programming languages and gave demos of F# halfway through. I think he sees very clearly that C# is a wonderful, mature, component-oriented programming language with an amazing set of tools in its Visual Studio incarnation. F# just doesn't try to rival that language in the domains where C# is absolutely excelling: object-oriented programming, component-oriented programming and tool-oriented programming.

That said, we love investing in new languages and we see it as very important for the platform. It's not really about language; it's more about domain. There are domains where F# really does excel, these sorts of data processing domains and the like. It's in those domains that you see people turning away from the object-oriented languages - to Python or Ruby for various reasons, and to F#, which has its own strengths there as well.

I guess F# does pick up some of its support and user base from those problem domains where people still need a typed language but object-oriented programming is just not the greatest fit. That really doesn't place the languages in competition - they are very complementary to each other. As a matter of fact, the way they integrate and inter-operate so well really does mean that you can choose your tuning point if you are writing a mixed F#/C# application.

It's quite common: you do the presentation stuff in C# and you do some core algorithms or core data processing in F#, and you can choose the tuning point. It's not so much about language; it's more based on the culture of your company, the expertise you have available, or the legacy code bases that you might have. There is no hard dividing line saying you have to do this here - there is a great patch in the middle.

We certainly laid foundations which enable those frameworks. In the .NET world, people always face an interesting choice: "Do we do a generic multilanguage component that works at the .NET IL level, for example, or do we do a component that is specific to a particular language?" I expect to see a lot of existing components - say, object-relational mapping components - working to target C#, F#, Visual Basic and all the .NET languages; maybe they'll do that by generating C# code, which is quite common.
But we definitely laid the foundations in the language for very interesting language-specific frameworks.

We've had people at Microsoft Research, for example, do web programming frameworks where you could write both the client and the server side: you could mark up the parts of your program as being client or server, the client side was run as JavaScript, and the server side was run as native code through the common language runtime. This was a really very interesting project, because it showed how far you could get in creating this kind of framework and getting type checking across this boundary, and it's quite a heterogeneous system you're executing.

In fact, you could have parts of your server-side program be database queries, which actually run on the database, so you get this three-tier kind of application - client over server over database - in a very nice way. We'll see what people make of it. Our job is to provide the foundations, and people are going to do all sorts of things. Just recently, a company called Avanade has been running fragments of F# programs - a functional programming subset - on FPGA circuits, and that's fantastic. We've set the foundations to run F# code in all sorts of interesting ways: on the GPU, on the FPGA, on databases.

First of all, I'm incredibly thankful to the Scala research group, because I got to spend a month there - 5 weeks - last year as part of an extended visit, and I learnt a lot that had a lot of influence on the design of F#. I'm very grateful to Martin Odersky for that and for the advice given about F# along the way. I think people make the comparison because both languages embrace, in some sense, both functional and object-oriented programming, and because both are seen as sort of next-step languages. People make the C#-Java comparison, and people make the next-typed-language comparison.

The emphasis is surprisingly different: Scala is very much about better component-oriented programming for the Java platform. Although F# does a very nice job of object-oriented programming, we haven't sought to make fundamental improvements at the component level, in a sense. We are quite happy to say, "You are making components? OK, make a .NET component."

.NET has great APIs, and we follow the .NET component guidelines. You can do more F#-specific components if you want, but .NET lays a good solid foundation that there is not a lot of point in deviating from as you write components. In a sense, you might say that the focus of Scala is programming in the large - I think they also do a very good job of programming in the small, don't get me wrong - but the focus of F# is probably more about that very sweet core of the functional programming language and making that work in the context of .NET.

They're actually surprisingly different languages, and they are both making a contribution in different ways.

They are basically a way of writing lightweight agents in F# code - of doing user-level, non-blocking flows of control. You can mark the points where you want to do, say, an asynchronous read from the web, and when the response actually comes back from the other side of the planet, the program continues from that point. Or maybe you want to do several asynchronous requests in parallel and then bring the results back together. You can do all sorts of user-level threading work easily with this mechanism, and that's fantastic. It's actually built on the foundations of monads, most famously from Haskell.
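A small sketch of the style, using `Async.Sleep` to stand in for a non-blocking network wait (the workload names are made up):

```fsharp
// Each workflow suspends at do!/let! without blocking an OS thread.
let fetchLength (name: string) (delayMs: int) = async {
    do! Async.Sleep delayMs              // stands in for an asynchronous I/O wait
    return name, String.length name      // the "response" arrives; continue here
}

// Run several requests in parallel and collect the results
// (Async.Parallel preserves the input order).
let results =
    [ fetchLength "alpha" 30; fetchLength "beta" 10 ]
    |> Async.Parallel
    |> Async.RunSynchronously
// results = [| ("alpha", 5); ("beta", 4) |]
```

The `do!` and `let!` points are exactly the marked suspension points described above; everything between them runs as ordinary strict F# code.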

Monads were originally about taming side effects, and then people discovered they are actually very good for data as well. A mechanism that was originally used for side effects in Haskell ends up being used for queries in C#, which is about data, and for control in F#, which is asynchronous workflows, parallelism, user-level agents and the like.

First of all, that shows what a powerful pattern the monad pattern actually is, but it also shows you can solve more than one problem through monads - you choose which one you want to solve: data, control or side effects. You can sometimes combine them a little: transactional memory, for example, is a bit about all three. That's where asynchronous workflows come from; that's what they do.
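In F#, the monad pattern surfaces as computation expressions; a minimal sketch of a hypothetical builder over `option`, showing the same bind/return shape that async workflows use for control:

```fsharp
// A minimal computation-expression builder: Bind and Return are exactly
// the monad operations; 'let!' compiles to Bind and 'return' to Return.
type OptionBuilder() =
    member _.Bind(m, f) = Option.bind f m
    member _.Return(x) = Some x

let opt = OptionBuilder()

// Both lookups succeed, so the whole computation yields Some 42;
// a None anywhere would short-circuit the rest.
let sum =
    opt {
        let! a = Some 20
        let! b = Some 22
        return a + b
    }
```

Swapping in a different `Bind` gives a different interpretation of the same syntax - sequences for data, async for control - which is the "choose which problem to solve" point above.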
I find it a really fantastic addition to F#, because it allows us to exploit these virtual machines, these great .NET and Java frameworks, and takes that exploitation a step forward. It says: "OK, the threading on those platforms is just too heavyweight." They map down to operating system threads - that's great for many scenarios, but for some kinds of agent-based programming that's just not going to work. This way we get the benefit of these frameworks, but we can sit back and do agent-level programming for those parts of the application where we need it. That will allow us to take these frameworks, based on operating system threads, through another generation of trends in the computer industry, as more and more agent programming is needed.

Working on F# is absolutely, fantastically enjoyable - it's really a lot of fun. It's not only the fact that we are shipping it; it's also that I get to work with the users. I get around our building in Microsoft Research and I get to see that it's making people happier, making them enjoy their work more. People just tell me, "Yes, I enjoy my work every day now" - and not because they are just programming more and more, but because they are actually doing their research work, finding a representation for it in their programs more easily and with fewer bugs. That makes me really happy.

Another thing with programming languages is that they are a great unifier of people all around the world, which I personally find very enjoyable. I get to work with people from countries all over the world, asking questions about how they use F# and the different ways they are starting off with it - all sorts of different industries. .NET is part of that.

I found it really interesting to talk to one of the chief platform architects for AutoCAD, at the Autodesk company, because they have .NET extensions for the AutoCAD tool, and normally functional programming wouldn't get a reach into that kind of world - but because it's on .NET, suddenly it's much easier to get these things to work together. King W., who has a great blog on F#, is running a session at Autodesk University, which supposedly has 10,000 people going along to learn about AutoCAD - that is interesting. Just recently, F# has been used a bit in the finance industry, and getting inside the finance industry in these times has been very interesting for me.

This is in the context of C# - this is going back to "Should we be extending Generics in C# in particular ways?" It's certainly possible. There are technical limitations of .NET Generics that make compiling things like ML modules or some elements of Haskell a little bit tricky, or just impossible to do correctly in full on top of .NET Generics. Those were deliberate limitations, in a sense - necessary at the time .NET Generics was designed, in 2001-2002. There is definitely potential to extend .NET Generics.

On the other hand, we knew when we were designing .NET Generics in 2001-2002 that this was the sort of feature we had to get into version 2.0 of the platform, not later. If it had gone into version 3.0, the uptake of Generics would have been perhaps a tenth of what it is today. If a feature is in version 2.0, it becomes totally standard; in version 3.0 it becomes a much more niche kind of thing; and by version 4.0 the returns are incredibly diminished, as versions of these kinds of platform progress.
That is why we worked so hard to get .NET Generics into the system. It was actually an exceptionally close call whether it went in or not. There was a memorable day when I had to travel back from Switzerland to Germany - it was my partner's birthday, and I missed that birthday in order to get the final check-in into the system to make .NET Generics work with precompilation in the .NET platform. It was a very close call: I missed that train. It would have gone in sooner or later, but it was a very close call whether we got it in complete, in the way it had to be.

Although technically we could definitely do extensions, it's tricky: the CLR is an absolutely core component - it's like the telephone system, you can't change it all of a sudden. You can make small extensions and you can develop it, but making big radical changes, adding really significant dimensions to the programming model, is quite hard.

The other thing is that other generations of researchers, architects, engineers and designers in that context may have a completely different analysis, and they might say: "Now is the time we have to do such-and-such to the .NET platform." For example, the parallel extensions are going into the next version of .NET. That's exactly that: they said, "Now is the time to really lay foundations for parallel programming on the .NET platform." They are really putting it in there, making it a core part of the experience of the .NET platform, and that's fantastic.

In other words, people do what they do - like I did with Generics - and other people come along and add even more. Who knows what the next generation of people will do to the platform and to programming?

We will continue the process that we've followed through the development of F#, which is that at each version there is some polishing to be done. We've been marking a lot of things as deprecated over the last few releases. We nearly always do that in ways where things keep working - you just get a warning message. The F# compiler has a surprising number of warning messages along the lines of: "You've got to change your code - that won't work in future."

It continues to work, but it won't be part of the language going ahead. That process will come to completion in Visual Studio 2010. From here to the release of 2010 we'll go through the usual betas, the usual process of releasing a Microsoft technology, and if there are changes, design improvements or any kind of cleanup, we'll be making clear what that cleanup is throughout the process - it's the standard process of releasing from this point to 2010.

Yes, we've been having interesting discussions about that. We've got some good ideas, though we don't know exactly when we are going to action them. We actually have a design which is very simple - you can explain it in a very straightforward way, which I think is appropriate; the design we have feels right for F#. We don't have an implementation of it yet, but I think the design looks about right. We have to resolve dynamic dispatch at runtime; we have to choose a default behavior for resolving dynamic name lookup and invocation, and the DLR has some default semantics for this.

The question is how much we would have to vary that story for F#, and what the dynamic binder would look like for F#. It's something we're looking at adding; there are a couple of use cases where dynamic language features are quite compelling. In dynamic languages there are many, though there may be only a couple at the top of my head when thinking about that feature. We have to weigh it against all the other changes and additions - mostly additions - we want to make to F#, and we have to have a full plan for version 1.0, looking ahead to version 2.0 of the language as well. We'll see what comes.

Excellent interview!

some influence has been forgotten!


While the interview is interesting, I find it quite strange that a strong F# influence has been completely forgotten here: OCaml. F# has sometimes been described as OCaml with the addition of the .NET Framework libraries.

This being said, I agree that F#'s power comes from the fact that different programming paradigms are mixed into a single language, so that if one does not fit the targeted task, another might. Multi-paradigm languages are on the rise, and IMHO that's good news, because they are more helpful; and it would change infinite debates like "my functional language versus your OO language", because both paradigms would be in the same language: these debates would move to another level.

Re: some influence has been forgotten!


it would change infinite debates like "my functional language versus your OO language", because both paradigms would be in the same language

I don't believe in that. Can you imagine a language that supports both laziness and strictness equally well, type classes with all of their extensions, the module system of ML, Lisp-style macros, both static and dynamic typing, and a standard object-oriented system with both interfaces and mixins and Lisp-style multimethods? One that both does and doesn't allow programmers to reassign variables? That both does (like Haskell) and doesn't (like everything else) have a separate type for non-referentially-transparent functions? And then there is Prolog, which is something completely different. And some experimental languages like Agda with (as far as I understand) a programmable type-checker...

Re: some influence has been forgotten!


Oh and you forgot about Elephant2000 :)

I agree with you, Dimitry, and that is why I like languages like Haskell, Smalltalk, Forth, and Lisp: because I can think of each of them as a coherent and consistent set of concepts. Yet languages like F# and Scala might help people get into polyglot and multi-paradigm programming :)

Re: some influence has been forgotten!


it would change infinite debates like "my functional language versus your OO language", because both paradigms would be in the same language

I don't believe in that. Can you imagine a language that supports both laziness and strictness equally well, type classes with all of their extensions, the module system of ML, Lisp-style macros, both static and dynamic typing, and a standard object-oriented system with both interfaces and mixins and Lisp-style multimethods? One that both does and doesn't allow programmers to reassign variables? That both does (like Haskell) and doesn't (like everything else) have a separate type for non-referentially-transparent functions? And then there is Prolog, which is something completely different. And some experimental languages like Agda with (as far as I understand) a programmable type-checker...

Well, I didn't mean to say that one ultimate language would settle all debates, or anything like that.

I wanted to say it's time to experiment, to cross old sterile frontiers, to change these never-ending debates by mixing programming paradigms into one language:
- With a functional-only language, most programmers are reluctant to do OO.
- With an OO-only language, most programmers treat functional features as toys, or as not ready for the mainstream.

With a multi-paradigm language (though not the ultimate one, of course), things would change, and IMHO developers would be less reluctant to cross the chasm: within a multi-paradigm language, the price of switching from one paradigm to another would be small.

Re: some influence has been forgotten!


It would be nice to be able to create an F# lambda in C# code. :)

Let's say that where you write (x,y) => x+y in C#, you could also write (x,y) -> (F# code here).

I'm sure it would be relatively easy to implement and support this in C# 5.

It would push the use of F# a lot in places where there's a big need to manipulate data in complex ways.

Of course, a refactoring tool could extract that lambda into an F# class and re-link the C# code in one swoop, if you ever need to go deeper into F# with classes and all.

Anyway, F# is powerful, but its syntax is very different from C#'s - more so than Scala's is from Java's. Java is also old, so there's a real need for Scala, whereas C# 5 already has a lot of functional tools, so there is not much to push F# against C# in the market.

The best would be to mix both, so that people can write small, simple F# lambdas first before moving on to more complex F# programming.

It would also push people to learn the interoperability limits and the corresponding types of both languages.

The support for all this is already there, but it might not make everyone happy.
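As a rough illustration of the interop that does exist today (a minimal sketch, assuming the FuncConvert.FromFunc helper from recent versions of FSharp.Core - not anything promised in the interview): a C# lambda arrives in F# as a System.Func value, and can be converted into an ordinary curried F# function.

```fsharp
open System

// A C# lambda such as (x, y) => x + y is seen from F# as a System.Func value.
let add = Func<int, int, int>(fun x y -> x + y)

// FuncConvert.FromFunc turns it into an ordinary curried F# function,
// which can then be passed to any F# code expecting int -> int -> int.
let addF : int -> int -> int = FuncConvert.FromFunc add

printfn "%d" (addF 2 3)  // prints 5
```

Going the other way, F# function values are compiled as FSharpFunc objects, which is exactly the "interoperability limits and corresponding types" point above: the mapping works, but it is visible at the seam between the two languages.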

awesome interview, why haven't more people read this?


I'm surprised this interview hasn't gotten more exposure - it is awesome. No doubt F# and Scala will always be compared. Sadek Drobi is completely right that languages like Scala and F# have turned younger developers (like myself) toward the polyglot school of thought. Again, great interview!