
New submitter omar.sahal writes "Bret Victor demoed the idea of instant feedback on your code ... allowing the programmer to instantly see what his program is doing. Chris Granger has turned this novel idea into Light Table — a new IDE designed to make use of Victor's insights."
The screenshots make this look like it could be genuinely useful — like a much fancier and more functional combination of features from SLIME and Speedbar. There's a Google group for those wanting to track development. There's no code yet, but source is promised: "I can guarantee you that Light Table will be built on top of the technologies that are freely available to us today. As such, I believe it only fair that the core of Light Table be open sourced once it is launched, while some of the plugins may remain closed source."

I'm happy to announce that we submitted our Kickstarter earlier today and are simply waiting for it to be reviewed.

In other news, to save everyone the time, I'll point out that 100 people are going to post that Light Table does what Smalltalk did in the '80s. As with all IT and most CS stuff, there really is nothing new under the sun, just recycling. That doesn't mean it's bad, or that reimplementing a good idea is bad, just that it isn't new.

Files are not the best representation of code, just a convenient serialization.

I've been thinking about this for a while, and I think we need a new generation of IDE that isn't based around showing source files in tabs, but rather code snippets (functions, class definitions, etc.) on some kind of desktop. When I'm debugging code I don't want to jump through X files; I just want to see the X related functions so I can understand the program's flow.

Another benefit of moving away from explicitly managing files is that the computer is probably in a better position than the user to decide how to present the code to the compiler/linker. It could also have benefits in source control, where you could track the history of an individual function better (imagine someone refactoring a function from one file into another).
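As a rough illustration of treating functions rather than files as the unit of code, here's a small Python sketch (the sample source and all names are invented) that pulls each top-level function out of a file into its own snippet, roughly the way such an IDE might index a project:

```python
import ast
import textwrap

# Hypothetical sample "file"; a real IDE would index a whole project.
SOURCE = textwrap.dedent("""
    def parse(line):
        return line.split(",")

    def total(rows):
        return sum(int(r[1]) for r in rows)
""")

def extract_functions(source):
    """Map each top-level function name to its own source snippet."""
    tree = ast.parse(source)
    return {
        node.name: ast.get_source_segment(source, node)
        for node in tree.body
        if isinstance(node, ast.FunctionDef)
    }

snippets = extract_functions(SOURCE)
print(sorted(snippets))    # ['parse', 'total']
print(snippets["parse"])   # just that one function, no surrounding file
```

With an index like this, "show me the X related functions" becomes a lookup over snippets instead of a hunt through file tabs.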

So, you want Smalltalk code browsers [onsmalltalk.com]. This IDE concept is nothing new: Smalltalk had that kind of code browsing from the start, along with the concept of a live image where every code change happens in a live VM. The only thing I see that's new here is some "modern," "HTMLy" UI.

So, you want Smalltalk code browsers [onsmalltalk.com]. This IDE concept is nothing new: Smalltalk had that kind of code browsing from the start, along with the concept of a live image where every code change happens in a live VM. The only thing I see that's new here is some "modern," "HTMLy" UI.

If a Kickstarter project and a new IDE are what it takes to get these ideas more commonly used then, as a former Smalltalker, I'm all game. The "live VM" idea of Smalltalk was probably way ahead of its time; with JITs and a much higher baseline of compute power even in smartphones, it's now high time we start seeing code as something beyond text files or DB blobs.

I'm still waiting for a non-smalltalk VM to feature the power of the walkback.

... The "live VM" idea of Smalltalk was probably way ahead of its time; with JITs and a much higher baseline of compute power even in smartphones, it's now high time we start seeing code as something beyond text files or DB blobs.

I'm still waiting for a non-smalltalk VM to feature the power of the walkback.

Which would (and can) still be retained in the underlying source code files. The visual representation at the IDE level *does not have* to be vis-à-vis with the actual physical textual ordering of definitions and declarations.

How to get that in a useful manner, that's another question. After all, tools and environments like VB, VFP and PowerBuilder attempted to show code in snippets as opposed to walls of text. Attempt they did, and the results were mixed. Sometimes it helped, sometimes it got in the way.

So that is the trick: the delivery of the concept. But the concept itself is not computationally impossible, not even with a PL where textual order matters.

Responding/adding to myself:

1. Why do the screenshots have to sample Lisp code? :)

2. Tools like this *might* push people toward functions and methods that are smaller, with lower cyclomatic complexity and better composition, cohesion and structure. It would be very hard to visualize a 300-line-long spaghetti wall-of-text function. ;)
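For what it's worth, cyclomatic complexity is easy to approximate mechanically. A crude Python sketch (counting 1 + decision points; the node list is a simplification of McCabe's actual definition):

```python
import ast

# Simplified stand-ins for "decision points"; real tools count more cases.
DECISIONS = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp)

def complexity(func_source):
    """Approximate cyclomatic complexity as 1 + number of decision nodes."""
    tree = ast.parse(func_source)
    return 1 + sum(isinstance(n, DECISIONS) for n in ast.walk(tree))

flat = "def f(x):\n    return x + 1\n"
branchy = (
    "def g(x):\n"
    "    if x > 0:\n"
    "        for i in range(x):\n"
    "            if i % 2:\n"
    "                x -= 1\n"
    "    return x\n"
)
print(complexity(flat))     # 1
print(complexity(branchy))  # 4
```

An IDE that surfaces a number like this per snippet could nudge people toward the smaller units described above.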

It would be very hard to visualize a 300-line-long spaghetti wall-of-text function ;)

Not much harder than visualizing 300 one-line functions (plus one more to glue them together).

This IDE sounds all exciting as long as you're working on a program that displays "Hey Chris!", not so much if you think of larger projects.

You are missing the point (the one about the size of a unit of code and complexity): a system consisting of 300-line-long functions/methods/procedures/anything is a PITA to work with. Yes, it could be visualized, but that will be of little help to the poor soul who happens to inherit such a system.

Or maybe I am misunderstanding the point you are trying to make. I'm being honest, we could be talking about completely different things.

Files are not the best representation of code, just a convenient serialization.

Trivially true: files aren't the "best" representation of code because the definition of "best" depends on context and goals (which shift constantly during a work session). That's a sort of non-claim. Absolutely true: files are a convenient serialization of code.

Some folks will look at the trivially true claim and think "Boo files! Let's do away with files altogether!". Then they will go off and develop something that throws away the absolutely true part of the claim [I'm looking at you Squeak, Centura SQ

It's probably less faddish than you think, because a metadata-based system encompasses files completely. For example, each tab in your IDE could instead be a keyword which filters out the functions/variables that come under that category. By removing the filters, you have a 'document' which contains all the code in the entire project.

Thinking files are ultimately a good idea is almost as bad as saying each HTML page should imitate the A4 size of paper, rather than a (possibly) infinitely long page.
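The tabs-as-keyword-filters idea can be sketched in a few lines of Python. Everything here is invented for illustration: each definition carries metadata tags, a "tab" is just a filter over them, and the empty filter is the whole-project 'document':

```python
# Invented sample project: every code unit carries metadata tags.
CODE_UNITS = [
    {"name": "parse_config", "tags": {"config", "io"}, "body": "def parse_config(...): ..."},
    {"name": "save_state",   "tags": {"io"},           "body": "def save_state(...): ..."},
    {"name": "render_menu",  "tags": {"ui"},           "body": "def render_menu(...): ..."},
]

def view(units, *tags):
    """Return the names of units matching every given tag; no tags -> everything."""
    wanted = set(tags)
    return [u["name"] for u in units if wanted <= u["tags"]]

print(view(CODE_UNITS, "io"))   # ['parse_config', 'save_state']
print(view(CODE_UNITS))         # all three units: the whole 'document'
```

Since removing all filters yields every unit, the files-as-documents view falls out as a special case, which is the "encompasses files completely" point.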

I was surprised they would come out and say this. Who the hell is going to donate their development time to this project if the creators are going to close down parts of the platform and charge them for it?

Besides the usual runtime inspection of data structures, you can evaluate expressions in the context of the app being run, even ones that don't exist in the app itself. And the evaluation includes calling the app's methods and modifying its state.
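In Python terms, that kind of live evaluation against a running object might look roughly like this; the toy Counter class stands in for the app, and `live_eval` is an invented helper, not any real debugger API:

```python
# Toy "app" with some live state.
class Counter:
    def __init__(self):
        self.value = 0
    def bump(self):
        self.value += 1

app = Counter()
app.bump()
app.bump()

def live_eval(expr, obj):
    """Evaluate an arbitrary expression with the running object bound as
    'self', including expressions that never appear in the app's own code."""
    return eval(expr, {"self": obj})

print(live_eval("self.value * 10", app))   # inspect: 20
live_eval("self.bump()", app)              # call a method, mutating live state
print(app.value)                           # 3
```

A Smalltalk-style inspector gives you exactly this: ad-hoc expressions evaluated against live objects, side effects included.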

If you want to search for a particular function, well, that's why you've got Google open in another window; it's a lot nicer than messing around in the IDE.

Or use Eclipse's search, which has several options for this (I don't know about VS).

If you want to see how data is flowing through your functions, you can watch variables, which is less confusing; in the demo, variables get replaced with the current literal value, which might make you forget that there's actually a variable there after a long coding session. I do admit that his idea of keeping track of what every function's state was when it was initially called, and displaying that, is interesting, but I'm not sure how that would work with loops or recursion (do you show the looped function multiple times?).
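One hedged guess at how the recursion question could be answered: record the arguments once per invocation, so each recursive frame shows up as its own entry rather than one mutable view. A Python sketch (the decorator and log are invented for illustration):

```python
import functools

# Global log of (function name, argument tuple) per call -- a stand-in
# for the IDE remembering what every call looked like.
CALL_LOG = []

def record_calls(fn):
    @functools.wraps(fn)
    def wrapper(*args):
        CALL_LOG.append((fn.__name__, args))
        return fn(*args)
    return wrapper

@record_calls
def fact(n):
    return 1 if n <= 1 else n * fact(n - 1)

print(fact(3))   # 6
print(CALL_LOG)  # [('fact', (3,)), ('fact', (2,)), ('fact', (1,))]
```

So "do you show the looped function multiple times?" would get the answer: once per recorded frame, each with its own captured arguments.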

Eclipse's debugger remembers parameter values, too. You can use that with the "drop to frame" step action. Which set of values you get in a recursive call depends on which stack frame you drop into.

The only really interesting thing is this idea of pulling related functions next to each other so you can look at them all at once, but I'm pretty sure the rest of the functionality isn't very novel.

And you can do that manually in Eclipse by playing around with docking several editor windows side by side. BTW, Window->New Editor gives you a second editor for the same file.

Live debugging seems cool; however, basically every other feature is already implemented better in Visual Studio, Eclipse, or NetBeans. Hell, I have 95% of the functionality in Vim already. Why not just make the live debugging a plugin for one of the more mature editors? It seems you would get a whole lot more bang for your development time that way.

In Java you can't make changes to a class's structure (methods, parameters, etc.) and still use "edit and continue". I assume the situation is even worse in C++. Thus it's more useful in debugging than in development.

Most Common Lisp systems compile functions on the fly, which also means each time you edit a function. Some C++ IDEs can likewise recompile the compilation unit on the fly as you go. Sure, recompiling a Lisp function may be a tad faster than recompiling a whole C++ compilation unit, but you won't notice this in practice, because compilation units tend to be pretty small(ish) nowadays.
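The redefine-while-running behavior being described can be mimicked in any language with a live namespace. A minimal Python sketch, with the "edit" simulated by compiling new source text and rebinding the name (all names invented):

```python
# Original definition, already in use by the running session.
def greet(name):
    return "Hey " + name + "!"

print(greet("Chris"))   # Hey Chris!

# "Edit" the function mid-session: compile the new source and rebind,
# the way a Lisp or Smalltalk image swaps in a recompiled definition.
new_source = "def greet(name):\n    return 'Hello, ' + name\n"
namespace = {}
exec(new_source, namespace)
greet = namespace["greet"]

print(greet("Chris"))   # Hello, Chris
```

The session never restarts; only the binding changes, which is the essence of the live-image workflow.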

I'm a dynamic languages fan and even I take offense at that. Static languages can do anything dynamic languages do and then some; they are just more verbose about it. It should be possible to adapt this IDE to C++, for instance.

You should upgrade your display. Since I switched to a 27" display, I don't feel like screen space is valuable anymore. Very, very rarely do I use anything in fullscreen, because I simply don't need to.

Back when 14" was the standard, screen space was really valuable. Today? Get a bigger display.

Yeah, just the fact that you said '27" display' tells me everything I need to know about you.

Shoe size? A/S/L ?

There are times I've considered a fourth...

For some people it's cars...:-)

I had a 21" as a secondary display for a while, but it turned out that I didn't really use it. But then again, I know enough about the mind to understand that multi-tasking is an illusion and focus makes you productive.

Running 5 different programs -- that'll get you nowhere fast. Having all three of the files you're currently working on open for easy reference without switching windows improves productivity considerably.

You should try actually learning something before questioning those smarter than you.

Glad to see the realtime filtering of documentation too. That's something that's been missing not just from programming languages, but from software and Windows in general (my own documentation [skytopia.com] for a program called opalcalc takes this to its logical conclusion at the bottom of the page).

What I'd really like to see though is not just real-time filtering of functions/methods/variables (each with their own metadata, so a word such as "exit" can be associated

I like the idea of working on a function at a time, not a file at a time. Code folding in Vim gets me part of the way, but searching for text unfortunately includes folded text and automatically opens the fold for me. Not what I want. Technically, functions shouldn't be so long that you have to search them, but in reality one has to work with 'fine' code sometimes. :)

What exactly is bad about finally packing up all those new ideas? I'd rather not use 9 different IDEs for the 1 cool thing each does, and besides, once you get a bunch of things together, it's often more than the sum of its parts.

It doesn't have to mean "this sucks ass" to be a negative or dismissive-seeming comment. Asking "what's new?" implies that if there isn't something new, it's not worthy. Maybe that's not what the poster intended, but that's how I, and apparently at least 4 mods, read it. And posts toward the top of the page tend to be either +5 or -1 until the story drops off the front page and the fine-tuning mods stop getting drowned out; that's just human nature combined with Slashdot's moderation system.

Watch this, his lecture and demo, and then tell me it's the same as what we already have, and that a man charged with designing new forms of human-computer interaction at Apple didn't know this. Also please explain why he wasted our time telling us something that already commonly existed in the software world, and how the conference organiser and whoever approved the posting missed all this. https://vimeo.com/36579366 [vimeo.com]
I was happy to see my post on Slashdot. It's quite heavily edited, but this has improved the post. One question for Slashdot: is the reason many posts get rejected that posters need heavy editing, and this wasn't done in the past?

The creator was a PM on Visual Studio. He's had quite a bit of experience as both an IDE user and developer, so I'm willing to give him the benefit of the doubt that he's spent quite a bit of time thinking about usability.

While I really like VS, there are some irksome points... namely that plugins are now fairly plentiful, but when it slows to a crawl, there's no way to tell which plugin is the culprit. Second is that half the time it crashes, it loses my settings... I like VS on my left monitor with the nav panels on the left, code to the right, and my right monitor for running/previewing. It resets, and that's annoying. That, and on the rare occasion I'm supporting a VB.Net project, it crashes 5x as much.

Unit testing isn't worse, but it's equally bad. If you can predict every combination of parameters relevant to the function, then your unit tests simply tell you when you broke your own function. I don't need help changing three lines of code. That was never the difficult part. I need help conceiving of complicated structures and not missing a given element.

If I could write the unit tests properly, I didn't really need them. So you wind up needing to test your unit tests for conceptual validity. It's

Have you ever seen a really big project that was 100% unit tested yet full of bugs, and full of methods implemented so that you could only use them for the one thing they were used for, despite being API calls with generic names that many third parties would depend on when writing their software? It makes you think the whole project is just some evil conspiracy.

Unit testing, in real life, just proves that in an isolated environment all the code paths can be executed with some set of parameters without crashing.

I don't write doom engines. you're trying to attribute my words to every single project in the world? You think that's sensible? My words and my opinions and my experience are valid for my world, the projects that I do -- oh and since I get to hand pick the projects that I do, they all fit, every time.

You might want to try taking other people's advice for what it is -- their experience. So sorry that you'll need to adapt it to your environment. If you want someone else's experience to come with an API,

If you can predict every combination of parameters relevant to the function, then your unit tests simply tell you when you broke your own function

If you seriously think you need to "predict" such stuff, then you need to go back to school. Guess what: there are ways of writing tests that are formally guaranteed to exercise every conditional in the source code, or, if you go deeper, every conditional in the assembly output of your compiler. Never mind that people write and maintain such test suites as a matter of everyday work. It's only magic if you're clueless, you see.
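A tiny sketch of that claim, assuming a function whose branch outcomes you can enumerate: one test case per branch arm exercises every conditional. The function and cases are invented for illustration:

```python
# A function with two conditionals and three distinct branch outcomes.
def classify(n):
    if n < 0:
        return "negative"
    if n == 0:
        return "zero"
    return "positive"

# One case per branch outcome covers the True and False arms of both ifs --
# the kind of suite branch-coverage tooling would demand.
cases = [(-5, "negative"), (0, "zero"), (7, "positive")]
for arg, expected in cases:
    assert classify(arg) == expected
print("all branches exercised")
```

Coverage tools automate exactly this bookkeeping: they report which conditional arms your suite has and hasn't taken.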

If you write complex code, where the flow path of one function can change the flow path of another function, then you get to multiply each of your tests by the number of other tests, so each function you unit test exponentially grows the number of unit tests you need.

So adding a third function, means adding more test cases to the previous two functions.

Yes, I know this makes no sense to you. Because you've never written a function that isn't self-contained, you've never enjoyed the benefits.
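The growth argument above, as a back-of-the-envelope sketch: if n interacting functions each have b independent branch outcomes, exhaustively testing the combined behaviour means b**n path combinations rather than b*n isolated cases. Purely illustrative Python:

```python
import itertools

def path_combinations(branches_per_fn):
    """All combinations of branch outcomes across interacting functions.
    branches_per_fn lists how many outcomes each function has."""
    return list(itertools.product(*[range(b) for b in branches_per_fn]))

print(len(path_combinations([2, 2])))     # 4: two functions, two branches each
print(len(path_combinations([2, 2, 2])))  # 8: adding a third function doubles it
```

Whether real suites must cover the full product, or can get away with per-function coverage plus a few integration cases, is exactly what the two posters are disputing.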

If you write complex code, where the flow path of one function can change the flow path of another function, then you get to multiply each of your tests by the number of other tests.

That's simply not true. As I've said, it's a problem that was solved a long time ago; you just never bothered to read up on the computer science side of CSE.

Yes, I know this makes no sense to you. Because you've never written a function that isn't self-contained, you've never enjoyed the benefits.

How on Earth you came to this conclusion is beyond me. Duh, I have plenty of code where a bug in one function can affect the execution paths of tens or even hundreds of other functions. We all have such code. Break your string library for a good example of this magical "flow path of one function can change the flow path of another function",

As I've said, you've never written functions that depend on the insides of other functions.

I don't at all mean your string library. I mean one business-logic function affecting another business-logic function. For example, write a neural network algorithm and process different node types using different functions. Notice really quickly that the internal actions of one function don't break the other function but drastically change what it does. It's a correct result, but an undesired result.

Right, so you can't unit test for correct but undesired results. Because the only thing that makes them undesired is the scenario at large, which has nothing to do with the input parameters to that function, nor anywhere near it.

So what unit test are you going to write? There's nothing to test. Yes, when the input is six, the object will turn left. Yes, that's correct. But the input shouldn't have been six. Yet the previous function's output of six was also correct. It saw a six, and it output six.

I think you're missing the idea that the project and its requirements and the ways in which features work change drastically from one day to the next. Think of it as an entirely different project that you won't be paid again for. Trying to fabricate a reduced example is rough here, but imagine building a product catalogue website for a grocery store, with foods and a set of tax laws and organizational structures and delivery.

Six months later it changes to a furniture store, with photography, differ

Quite the opposite. a) Clients and project requirements change year over year as they grow, so you need to change the function without changing everywhere it's called. b) The more intelligent the function, the more it can do. You, as a human, deal with varied input all the time. That's part of being intelligent.

For example, something as simple as boolean = isBlank( string ). There are at least a dozen ways that a human being considers a string to be "empty". Some of those depend on what that string

Once you go down the road of objects, I can't help you. Enjoy your object hell. Enjoy your definitions nowhere near the actual business logic that you write day to day. Enjoy things acting differently from one minute to the next. I won't help you.

But the next time you, as a human not writing code, get handed a food and asked if it's a fruit, you can create a new neuron called NamelessFruit and then check if it satisfies your seed clause. Me, I'll just look through my index of what makes a fruit a fru

You'll also, very quickly in my world, have more than three dozen isBlank() functions, all very similar but completely different.

Sure, and they'll also be localized to their class definitions, so you'll automatically use the right isBlank method at the right times. Under your approach, you'll need to define the same three dozen isBlank functions, but they'll all be mixed up in each other, and you end up relying on your global case-checking code (apparently consisting of a maze of if..then statements) to decide which one is relevant. If the possible calls are in any way complicated, this case-checking code will be a huge source of bugs.
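A minimal sketch of the "localized to their class definition" point, with invented Sku and Comment classes each carrying its own notion of blankness, so ordinary method dispatch replaces the global case-checking:

```python
# Each class owns its own definition of "blank"; the rules are invented.
class Sku:
    def __init__(self, code):
        self.code = code
    def is_blank(self):
        # A SKU is blank when empty or an explicit placeholder.
        return self.code.strip() in ("", "N/A")

class Comment:
    def __init__(self, text):
        self.text = text
    def is_blank(self):
        # Whitespace-only or markup-only comments count as blank here.
        return not self.text.strip().strip("-=*")

items = [Sku("N/A"), Sku("AB-1"), Comment("   "), Comment("hello")]
# Dispatch picks the right is_blank per object; no global case analysis.
print([item.is_blank() for item in items])   # [True, False, True, False]
```

The loop never asks what type it is holding; that decision lives with each class, which is the whole argument for the OO side here.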

Oh come on, I assumed you knew how to build logic-based intelligence functions without a maze of if statements. It's never about a different flow path for each input type; it's about quick-converting any input type into a lesser or greater type to match the logic gate. A little bit of basic math takes those logic gates to incredible speeds.

So it becomes more stable, not less. And making a change to the concept of the function means changing all types identically and instantly.

Oh come on, I assumed you knew how to build logic-based intelligence functions without a maze of if statements. It's never about a different flow path for each input type; it's about quick-converting any input type into a lesser or greater type to match the logic gate. A little bit of basic math takes those logic gates to incredible speeds.

Any compiler supporting classes and some notion of an interface will do that job for you. You've made, as we say in Polish, a pitchfork out of a needle. You're extolling handcrafting code for what the compiler should be doing for you, and you're under the impression that tightly coupling a lone function to a bunch of different object classes is a good thing. Your posts are full of well-known antipatterns, and I would not want anything to do with your code or your coding skills.

The claim

you're going to expect me to charge my client for the two hours of work that it's going to take to adapt a few megs of code throughout a project

Didn't originally intend it to be, but it kind of turned out that way. My only intention was to point out that tiny functions calling tiny functions, each of which does nothing complex, can easily be mapped to helpful IDEs that assist with trivial structural things; but properly complex code and experienced programmers don't gain insight through visualizations and references. We gain insight through cognitive compression, which is actually thoroughly destroyed by additional information.

You don't understand what I mean when I say quick-converting input types if you think that a compiler can do it for you. I mean taking a three megabyte hash structure, and converting it to the number six in a way that only means something to the isBlank function.

All of the antipatterns with which you're familiar are antipatterns because they make it difficult for others inexperienced with the project to work on the code. That isn't a priority in my world.

See, that's how clients react. And since you didn't get the money ahead of time -- they wouldn't give it to you -- you just wasted all of your development time, and your sales effort time, and you got nothing. That's a hell of an opportunity cost.

Dude, you don't know what the word "accountable" means. When there's a bug, or the code doesn't do what the client thought it would do, do you still get paid? When the client loses money, or doesn't make the money they expected to make, do you get paid? That's accountability.

You make up some kind of fictitious "sort of signature". My name's on all of my code. It's on the quote, the estimate, the contract, and the invoice. And you'd better believe that my code requires a hefty amount of effort to le

About the same as when anyone's supplier drops them as a customer. In my case, if I get hit by a bus, someone needs to spend a real amount of time, about a week, to learn enough to make small changes, and about three months to take over entirely. Hopefully I'm not the only one here. In your case, the client's just forced to get high-school work forever.

Yeah, it's a real issue. But it's a real business risk whenever your business depends on a single supplier; it doesn't require a bus to lose a supplier.