This is syntactically invalid, and I see no easy way to make it syntactically valid C or C++. The closest thing I can think of is Apple's blocks: you could write ^ { printf (...); } instead, which is no big deal. But then, how do you capture the value of Var and send it over?

For this to work, you need a fully homoiconic language, where you can transmit the code and its data over the wire, and where there is a reliable way to reconstruct it on the other side so that you can execute it there. I'm not saying you can't modify C to get there, but it certainly isn't easy. And in any case, this can't be done as just a library, in any language.
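To make the idea concrete, here is a minimal sketch in Python of what "transmit the code and its data" means when programs are data. Everything here is invented for illustration (the expression format, the tiny opcode set, the `eval_expr` name); it is not ELIoT's actual wire format.

```python
import json

def eval_expr(expr, env):
    """Evaluate a serialized expression tree on the receiving side."""
    if not isinstance(expr, list):
        return expr                       # literals evaluate to themselves
    op, *args = expr
    if op == "var":                       # look up a captured variable
        return env[args[0]]
    if op == "add":
        return eval_expr(args[0], env) + eval_expr(args[1], env)
    if op == "print":
        value = eval_expr(args[0], env)
        print(value)
        return value
    raise ValueError(f"unknown op: {op}")

# Sender side: the code that uses Var, plus the captured value of Var,
# serialize together because code is just data.
message = json.dumps({
    "code": ["print", ["add", ["var", "Var"], 1]],
    "env":  {"Var": 41},
})

# Receiver side: reconstruct and execute.
received = json.loads(message)
eval_expr(received["code"], received["env"])  # prints 42
```

The point of the sketch is that the captured value travels in the same serialized structure as the code, which is exactly what a non-homoiconic language like C makes hard.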

The planned security model is to expose only those features that are available to a given user. Say that temperature is available to anybody, but self_destruct requires a special privilege. Then anybody connecting to the device can request "temperature" and get a response, but "self_destruct" is not even in the symbol table, so there is no way to access it. Trying to use it results in a run-time error, just as if you had tried to call schtroumpf.

If you want to access a privileged feature, you do something like import "self_destruct", which checks whether you are allowed to import it. If you are, your symbol table is populated with self_destruct and you can call it; otherwise, you get a run-time error as above. This is not implemented yet, but it is definitely on my to-do list.
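The two paragraphs above can be sketched as a per-user symbol table. All the names here (`Session`, `PRIVILEGES`, the feature functions) are hypothetical, since this part is not implemented yet.

```python
# Which privileged features each user may import (illustrative data).
PRIVILEGES = {"alice": set(), "root": {"self_destruct"}}

def temperature():
    return 21.5

def self_destruct():
    return "boom"

PUBLIC = {"temperature": temperature}          # visible to anybody
PRIVILEGED = {"self_destruct": self_destruct}  # absent until imported

class Session:
    def __init__(self, user):
        self.user = user
        self.symbols = dict(PUBLIC)   # privileged names simply aren't here

    def call(self, name):
        if name not in self.symbols:  # unknown symbol: run-time error,
            raise NameError(name)     # same as calling schtroumpf
        return self.symbols[name]()

    def import_feature(self, name):
        # import "self_destruct": check the privilege, then populate
        # the symbol table so the feature becomes callable.
        if name not in PRIVILEGES.get(self.user, set()):
            raise NameError(name)
        self.symbols[name] = PRIVILEGED[name]

alice = Session("alice")
alice.call("temperature")            # works for anybody
root = Session("root")
root.import_feature("self_destruct") # passes the privilege check
root.call("self_destruct")           # now callable
```

The design point is that an unauthorized user never sees a distinct "permission denied" error: the symbol just doesn't exist for them.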

Another validation that I plan to implement is validation of "reply" code. Since you sent the code, including the possible "reply" values, you can check on return that only a valid reply comes back, and reject any reply code that does not match one you sent.
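A minimal sketch of that reply validation, with invented names (`Request`, `accept`), since this too is only planned:

```python
class Request:
    def __init__(self, code, replies):
        self.code = code             # the program fragment sent out
        self.valid = set(replies)    # the reply values shipped with it

    def accept(self, reply):
        # Reject any reply code that does not match one we sent.
        if reply not in self.valid:
            raise ValueError(f"unexpected reply: {reply!r}")
        return reply

req = Request("measure_temperature", {"ok", "sensor_error"})
req.accept("ok")                     # matches a reply we sent
# req.accept("self_destruct")        # would raise ValueError
```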

Regarding encryption, I'm still thinking. I'd like something very lightweight for performance reasons, e.g. XOR with a one-time pad.
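The XOR one-time-pad idea is easy to sketch. The usual caveats apply, and I'm assuming them here: the pad must be truly random, at least as long as the message, and never reused, or the scheme is broken.

```python
import secrets

def xor_pad(data: bytes, pad: bytes) -> bytes:
    """XOR a message with a one-time pad; XOR is its own inverse."""
    assert len(pad) >= len(data), "pad must cover the whole message"
    return bytes(b ^ p for b, p in zip(data, pad))

message = b"temperature 21.5"
pad = secrets.token_bytes(len(message))   # fresh random pad per message
ciphertext = xor_pad(message, pad)
plaintext = xor_pad(ciphertext, pad)      # decrypting is the same XOR
```

Performance-wise this is about as light as encryption gets; the hard part, of course, is distributing and never reusing the pads.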

descubes writes: ELIoT (Extensible Language for the Internet of Things) is a new programming language designed to facilitate distributed programming. A code sample with fewer than 20 lines of code looks like a single program, but really runs on three different computers to collect temperature measurements and report when they differ. ELIoT transforms a simple sensor API into a rich, remotely-programmable API, giving your application the opportunity to optimize energy usage and minimize network traffic.

Using fewer resources than Bash, and capable of easily serving hundreds of clients on a Raspberry Pi, ELIoT transparently sends around program fragments, along with the data they need to function, e.g. variable values or function definitions. This is possible because, as in Lisp, programs are data. ELIoT has no keywords, and program constructs such as loops or if-then-else are defined in the library rather than in the language. This makes the language very flexible and extensible, so you can adapt it to the needs of your application.

The project is still very young (published last week), and is looking for talented developers interested in distributed programming, programming languages or language design.

The language alone is not good enough, but it is simple to share. By contrast, building a complete web browser today is quite difficult, and even a smaller "graphic" language like Tao3D is not that easy to build, particularly if you include all the dependencies: for Tao3D, you need Qt with WebKit, OpenGL, VLC, XLR, LLVM, and half a dozen others I forget. So I think exposing the language-only part is interesting. For a while, Tao3D was the same project as XLR, but we decided to split early on: we wanted XLR to remain a non-graphical, non-reactive, non-networked, easy-to-port language.

While Go and Swift are interesting incremental improvements, they do not take into account what we have learned about programming languages. In many ways, these two languages seem firmly stuck in the 1980s. For example, Go has no generics, and as far as I can tell, Swift still does not have the kind of true generic types I introduced in XL in 2000, i.e. the possibility of calling "ordered" all types that have a less-than, and then defining functions in terms of "ordered" instead of having to use <T> all over the place, just as in C++ (and please, could we stop using angle brackets?)
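To illustrate the "ordered" idea outside of XL, here is roughly what it looks like sketched with Python's structural typing. This is my approximation of the concept, not XL syntax: you name the set of all types that have a less-than, then write functions against that name instead of sprinkling <T> everywhere.

```python
from typing import Protocol, TypeVar

class Ordered(Protocol):
    """Any type with a less-than operator is 'ordered'."""
    def __lt__(self, other) -> bool: ...

T = TypeVar("T", bound=Ordered)

def smallest(items: list[T]) -> T:
    """Works for every ordered type, with no per-type code."""
    result = items[0]
    for item in items[1:]:
        if item < result:
            result = item
    return result

smallest([3, 1, 2])       # int has __lt__, so it qualifies
smallest(["b", "a"])      # so does str
```

The constraint lives in one named concept (`Ordered`) rather than being restated at every generic use site.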

More generally, there is a lot to be learned from the more dynamic languages deriving from Lisp. Being able to treat code as data (homoiconicity) completely changes things. It means your language can be extended in itself, just as Lisp integrated object-oriented capabilities effortlessly. It means you can do metaprogramming, introspection, reflection, and dynamic code generation in a natural way, rather than with specialised ad-hoc features: all things that Go and Swift spectacularly fail to do.

I fail to see benefits of a similar order of magnitude in Swift or Go, and that annoys me. Companies like Apple and Google have the means, if only financial, to make bigger things happen, particularly when smaller teams like ours have already done a lot of the investigative work.

I'm crazy enough to believe I have found a path to unification that is actually quite simple: add a new relativity principle that states that laws of physics must be the same irrespective of the measurement instrument we use. Here is a parallel:

- Special relativity states that the laws of physics must be the same irrespective of your state of motion. So a complete description of an experiment must include which frame of reference you are using. There is no absolute space, no absolute time, no aether. And we need new transformation laws from one frame of reference to the next: the Lorentz transforms.

- General relativity states that the laws of physics must be the same irrespective of acceleration. So a complete description of an experiment must include accelerations, including gravitation. Space-time is no longer flat, but curved by gravitational fields. So we need new transformations from one curved space-time to another, using tensor math, covariant and contravariant four-vectors, etc.

- My still-incomplete theory of incomplete measurements (TIM) states that the laws of physics must be the same irrespective of the measurement instruments used. So a complete description of an experiment must include which instruments were used, including their calibration and range. The fact that two instruments are calibrated to coincide over a given range does not allow you to postulate that they match at every scale. Space, time, mass, and other measurements are no longer continuous but discrete (because all our physical instruments give discrete results). We need new transformations when going from one physical instrument to another, which correspond almost exactly to renormalisation in quantum mechanics, but come with an explanation of their origin.

The TIM focuses on what I learn about a system using a physical measurement instrument. This starts by defining what an instrument is:

- It's a portion of the universe (i.e. it's not "outside the matrix"),
- which has an input and an output (e.g. the probe and the display of a voltmeter),
- where changes in the state at the input yield a change in the state of the output (changes in voltage result in changes on the display),
- which ideally depends only on the input (the voltmeter picks up the voltage at the probe, not somewhere else),
- and changes only the output (nothing being said about changes to the input, since even macro-scale experiments can be destructive),
- the change in the output being mapped to a mathematical representation (often a real number) through a calibration.

The instrument gives me knowledge about the state at the input. Since the instrument has a limited number of output states, my knowledge of the system through this instrument at any given time is described by a probability for each of the possible states. If there are N states, the probabilities p_1...p_N are all positive and sum to 1. The knowledge state can therefore be represented by a unit vector in dimension N.
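A small numerical sketch of that representation, under my reading of the text: taking the amplitudes sqrt(p_i) as components, the sum of squares is the sum of the probabilities, i.e. 1, so the resulting N-dimensional vector has unit norm. The function name is invented for illustration.

```python
import math

def knowledge_state(probabilities):
    """Map N output-state probabilities to a unit vector of dimension N."""
    assert all(p >= 0 for p in probabilities)
    assert abs(sum(probabilities) - 1.0) < 1e-9
    return [math.sqrt(p) for p in probabilities]

# An instrument with 3 output states, observed with these probabilities:
state = knowledge_state([0.5, 0.25, 0.25])
norm = math.sqrt(sum(a * a for a in state))   # norm is 1 by construction
```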

For example, if I care about "is there a particle here", the possible measurements are "yes" and "no", and the knowledge state can be represented by a unit complex number. If you now want to answer that question for a plate with 1 million possible positions, you have a field of 1 million complex numbers, with the additional constraint that the particle must be at only one position (expressed as the sum of the probabilities of all the "yes" answers being 1). That field is remarkably similar to the wave function, and this reasoning explains why it is complex-valued, why it encodes a probability of presence, and why it collapses when you know where the particle is.
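Here is that "field of complex numbers" picture on a toy grid (10 positions instead of 1 million), as I understand it: one complex amplitude per position, the squared magnitudes summing to 1 because the particle is at exactly one position, and collapse once the position is known. This is only a numerical illustration of the constraint, not a simulation of the theory.

```python
import math
import random

N = 10                                # positions on the plate
random.seed(0)                        # reproducible toy data
raw = [complex(random.random(), random.random()) for _ in range(N)]

# Normalize so that the sum over all positions of |a|^2 is 1:
# "the particle is at only one position".
total = sum(abs(a) ** 2 for a in raw)
field = [a / math.sqrt(total) for a in raw]

# "Collapse": once we know the particle is at position k, the
# probability is 1 there and 0 everywhere else.
k = max(range(N), key=lambda i: abs(field[i]) ** 2)
collapsed = [1.0 if i == k else 0.0 for i in range(N)]
```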

But the primary difference from QM and GTR is that space-time is no longer continuous. It is discrete, and the discretization depends on the instrument being used. Because it is discrete, the sums you compute never contain any theoretical infinities (these infinities being the reason why QM and GTR are considered fundamentally incompatible).

Here is a layman's view of the incompatibility between QM and GTR. Imagine ants trying to define the laws of physics on Earth. They set up rules, e.g. their anthill is at only one place in the universe, so the sum of the probability of finding the anthill over all of space-time is 1. But once they realise that the Earth's surface is not flat but curved, the method above no longer works: if you go to infinity along the surface of the Earth, you "count" the anthill multiple times, so your integral, instead of being normalized, diverges to infinity. It is only an analogy, but it is an interesting one.

jfruh writes: Microsoft's acquisition of Nokia's handset business was mostly focused on gaining a hardware line that ran the company's Windows Phone OS, but in the process, Microsoft also gained ownership of some model lines that are classified as "feature phones" and some that are straight-up dumb, and they're still coming out with new models, confusingly still bearing the "Nokia" brand. The $20 Nokia 105 is billed as a "long-lasting backup device" and comes with an FM radio, while the $30 Nokia 215 is "Internet-ready" and comes with Facebook and Twitter apps.

that thorium reactor is fission, not fusion. Not exactly interchangeable.

Obviously, which is why I wrote thorium / fusion, with a slash. You want the combo. Jumpstarting a fusion-only reactor from the wireless power line? That takes forever! Last time I checked, you need at least two to three frigging minutes!

A thorium reactor, on the other hand, is a good little backup, underpowered, sure, but largely enough to fire up a Fusion Drive 6G almost instantly. Also, many small thorium generators fit in your pocket, whereas even the latest Mr Fusion is big enough that you need a car to haul it around. So when I want a senso-holomovie on the beach, I always carry a little thorium booster with me, just drop it in the seawater for a few seconds, and I'm good to go!

Also, I forgot something essential in my list. You probably want a temporal adjustment controller. I just realized mine is on the fritz, and I'm no longer sure which year I'm in. Can you imagine if you make a mistake and talk about recent technology to, say, early 21st century Slashdotters? That would be cruel.

How would you otherwise teleport when the martians attack and the grid is down? Plus, I think that if you have a quantum teleporter, having at least a couple of terawatts locally is basic construction code in most places.

Founder and CEO of Taodyne (interactive 3D)
Architect of HP Integrity Virtual Machine, HP's big iron virtualization for Itanium.
Creator of the XL programming language (http://xlr.sf.net), ultimate metaprogramming.
Author of Alpha Waves, the very first 3D platform game.
Author of several video games for HP-48, including PacMan and Lemmings.
http://www.dinechin.org/christophe/Resume.html for more...