Deliberate Git by Stephen Ball, recorded at Steel City Ruby 2013. To this day, I have my team sit down together and rewatch it every time we onboard someone new. It’s a fantastic level-set of commit message etiquette and purpose plus an overview of history tools.

Admirable effort, but punts badly on collision detection (in part 2 if one follows the link). One really needs at least some basic physics engine in even the simplest platformer. Hopefully that’ll be in a future post (box2d?).

Author of the article here. The future post won’t use a physics engine. Physics engines are bad for 2d platformers which aren’t necessarily physics based. I mean if the goal is to make a physics based game (think Angry Birds) then sure, but if the goal is snappy controls like Super Mario, then you’ll fight the engine more than it helps imho. Sliding platforms, elevators and similar things are quite the pain for 2d physics engines. Can’t speak for 3d though.

The next post will be using box colliders with raycasts. Once you have a 2d raycast (might even be enough to just have horizontal/vertical raycasts) you can do even stuff like slopes fairly easily, but the 3rd part most likely won’t get into that. I’ll probably cut it off when gravity/jumping works with static platforms.
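
To make the raycast idea concrete, here’s a minimal sketch in Python of a downward raycast resolving gravity against static platforms. All names and numbers are my own illustration, not the article’s actual code:

```python
GRAVITY = 0.5  # per-frame acceleration; an arbitrary illustrative value

class Platform:
    """A static axis-aligned box; only its top edge matters here."""
    def __init__(self, x, y, w):
        self.x, self.y, self.w = x, y, w  # y is the top edge (y grows downward)

def raycast_down(x, y, max_dist, platforms):
    """Cast a ray straight down from (x, y); return the distance to the
    nearest platform top within max_dist, or None if nothing is hit."""
    best = None
    for p in platforms:
        if p.x <= x <= p.x + p.w and p.y >= y:
            d = p.y - y
            if d <= max_dist and (best is None or d < best):
                best = d
    return best

class Player:
    def __init__(self, x, y):
        self.x, self.y = x, y   # y tracks the player's feet
        self.vy = 0.0
        self.grounded = False

    def update(self, platforms):
        self.vy += GRAVITY
        # Only cast while falling; the ray length is this frame's movement,
        # so the player can never tunnel through a thin platform.
        hit = raycast_down(self.x, self.y, self.vy, platforms) if self.vy > 0 else None
        if hit is not None:
            self.y += hit       # snap the feet to the platform top
            self.vy = 0.0
            self.grounded = True
        else:
            self.y += self.vy
            self.grounded = False
```

Jumping then just sets `vy` negative while `grounded` is true, and horizontal rays against walls work the same way.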

Any tips are welcome though.

edit: Just to react to the comment :P

but punts badly on collision detection (in part 2 if one follows the link)

You’re right, I’m not really that happy with the state it’s at. Initially I thought I’d make it one big article, but keeping all the code in sync ended up being a nightmare, which is why I decided to split it up, cut it off at a point where something works, and do the next part in a more concise manner.

Regarding keeping the code organised for a multi-part article: why not create a Git or Mercurial repository - on GitHub, Bitbucket or GitLab, for example? The code for each article can be in a separate branch which you can link to directly. You’d still need to manage changes made to an earlier stage, but that’s pretty straightforward.

That’s not a bad idea, I thought about having a Gist of the finished code in each article. But the issue I was hinting at is with code snippets within a single article, not spanning multiple ones.

My approach to these articles is to have incremental samples with JSFiddle along the way, but all of those are separate snippets, and if I decide to change one thing I have to rewrite a lot of the article. I mean, the solution to this is easy - don’t keep 10 copies of the code in each post - but I feel like that just makes it more difficult to follow along. I guess I should probably figure out the whole code first before I start writing, to minimize the changes.

The next post will be using box colliders with raycasts. Once you have a 2d raycast (might even be enough to just have horizontal/vertical raycasts) you can do even stuff like slopes fairly easily, but the 3rd part most likely won’t get into that. I’ll probably cut it off when gravity/jumping works with static platforms.

This sounds great, I’d love to read that. It would be a valuable addition to the material already out there.

I think the method is describing what’s now called deliberate practice.

The better books will have relevant exercises - different from the material in the book - that test your understanding of the material and stretch it a little (otherwise known as deliberate practice): focused exercises that push you somewhat beyond your current ability.

I mean, people should really do that for themselves, but often you need some domain expertise and guidance to steer you in the right direction. The best teachers I’ve known do this, sometimes naturally, sometimes through experience, and sometimes through deliberate practice.

I’ve noticed lots of extremely smart programming language researchers get excited about Kanren or miniKanren, and I’m not really smart enough to see why. It’s always been presented somewhat opaquely. This talk has blown my mind. One of those rare, true satori moments. Gonna spend some serious time re-reading The Reasoned Schemer to kick off 2018.

I think the trick is do it with something you know well to get a feel for solving the problem, then try it in X.

Generators are great though - I used Racket’s generator support to implement the infinite spiral of day 3. I should look into doing it with the lazy language so I could just use regular list operations, but lazily. I tried using streams, but the performance was much worse - the garbage collector got overwhelmed, which probably means I’m not using them correctly.
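
For comparison, the same kind of infinite spiral is pleasant to write as a generator in Python. This is just a sketch of the idea (not the Racket code), assuming the usual Advent of Code 2017 day-3 layout:

```python
from itertools import islice

def spiral():
    """Yield (x, y) coordinates walking an outward square spiral from the
    origin: right, up, then left, down, with run lengths growing each lap."""
    x = y = 0
    yield (x, y)
    step = 1
    while True:
        # one lap: right `step`, up `step`, left `step + 1`, down `step + 1`
        for dx, dy, run in ((1, 0, step), (0, 1, step),
                            (-1, 0, step + 1), (0, -1, step + 1)):
            for _ in range(run):
                x += dx
                y += dy
                yield (x, y)
        step += 2

# the generator is infinite, so slice off what you need:
first_squares = list(islice(spiral(), 12))
```

Because it’s lazy, the consumer takes exactly as many cells as the puzzle input requires, with no stream machinery involved.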

Thanks - I’ve joined. I don’t think I’ve ever got in the top 100 for any of the challenges so a private leaderboard feels much more relaxed. I’m really not good at speed coding which isn’t a good combination with being hyper competitive :(

There’s an ongoing meta-discussion about the leaderboard, coupled with the release time of midnight EST. As the creator has mentioned, the envisioned scale was on the order of hundreds of competitors; now it’s up to 100K! The downsides of success.

In the end, I find it more enjoyable to compete against myself. There will always be someone more dedicated and faster, so the idea is to make the best solution you can. My overarching goal is to solve every problem myself without resorting to hints from other users.

I started using StackOverflow in the beta days and have a hojillion points now. I’m still passively getting points from things I posted at the start.

However, points aren’t meant to say “you’re a good developer.” They’re meant to say “you contribute the kind of content that StackOverflow users want and therefore we trust you to keep doing so.” Editing privileges, etc, flow from that understanding.

It’s probably impossible for someone starting today to ever catch up to my score. But I’d argue it doesn’t matter, if they can get enough points to get privileges and keep curating the site. (If not, the site will suffer.)

I’m not sure if my success there is due more to knowing how to ask questions on StackOverflow or to my high score. But for me at least, the demise of StackOverflow is greatly exaggerated, and has been for years.

The feeling you get when fixing a single issue has an unexpected global benefit is incredible.
Performance tuning in general is a mysterious art of balancing concerns, but database tuning in particular seems one of the blackest and so often overlooked.

The “identify language by location” issue is worse than anything account-related, honestly - I don’t use a Google account, but if someone links me to a Google Doc or Google Groups or something, I get Thai “in-page chrome”: all buttons, navigation, etc. are in a language I can’t read and barely speak.

Why? Because google has apparently decided that the Accept-Language header is a filthy communist plot and can’t be trusted.

This is more of an issue than the one described in the article, because there is no obvious way on any Google pages to change language. The author does understand the language shown to them; it’s just not the one they wanted.

Why? Because google has apparently decided that the Accept-Language header is a filthy communist plot and can’t be trusted.

You really couldn’t trust it in the mid-’90s when the internet started taking off (it was normally set to the language spoken by the people who made the browser).

Unlike other things from the ’90s (like websafe colors), this one seems to stick around, possibly because relatively few people are affected by it and because it doesn’t impact how the site looks on the CEO’s machine (unlike websafe colors).

By now Accept-Language is very reliable and should be treated as a strong signal. Much stronger than geolocation or other pieces of magic (which also regularly fail spectacularly in multi-lingual countries - damn French YouTube ads here in Switzerland).
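
Honouring the header really isn’t much work either. Here’s a simplified Python sketch of parsing Accept-Language with its q-values (ignoring edge cases the full RFC grammar allows):

```python
def parse_accept_language(header):
    """Parse an Accept-Language header into (language-tag, quality) pairs,
    ordered most to least preferred. Simplified: assumes well-formed input."""
    prefs = []
    for item in header.split(","):
        item = item.strip()
        if not item:
            continue
        tag, _, params = item.partition(";")
        q = 1.0                      # default quality when no q-value is given
        name, _, value = params.strip().partition("=")
        if name.strip().lower() == "q":
            try:
                q = float(value)
            except ValueError:
                q = 0.0
        prefs.append((tag.strip(), q))
    # sort is stable, so equal q-values keep the header's own order
    return sorted(prefs, key=lambda p: -p[1])

# e.g. a browser in French-speaking Switzerland might send:
prefs = parse_accept_language("fr-CH, fr;q=0.9, en;q=0.8, de;q=0.7")
```

A server that walks this list and picks the first tag it supports would serve that user French - no IP guessing required.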

I mean - ok, but that’s 25 years ago. You couldn’t stream video in the mid-’90s, you couldn’t browse on a mobile device, shit, you couldn’t do most of what people do now on the web in the ’90s.

Even then it sounds like a bad idea - false positives, where the user’s language isn’t that of Accept-Language, mean they’re already managing to use a browser in a language other than their own.

Ultimately there are numerous better options that allow for potential mismatch of the browser language - but google uses none of them. They just base it on IP country code and fuck anyone who’s disadvantaged by it.

Funny that you should mention French ads on YouTube in Switzerland: just yesterday I was complaining on IRC that I only get Swiss German ads on YouTube and Spotify, although I’ve been living in the French-speaking part of Switzerland forever. It’s even more surprising because I don’t speak German or Swiss German, and Google almost certainly knows this about me - the locale sent by my browser is fr-CH, etc.

And what a brain-dead decision not to make switching language the easiest possible UI interaction. At least some sites make it easy, e.g. by listing the desired language in that language - because guess what, I don’t know what the word “English” is in Arabic.

Not to mention the fact that there are MULTIPLE valid languages for many locations.
Worse still, so many other sites have blindly copied the idea. God help you if you travel or spend significant time where you don’t understand the local language and don’t want to be forced to log in (or even have an account).

I wrote a Greasemonkey script to add an hl=en parameter across all Google locations, and I dread the day they decide not to respect that.

I really enjoyed Haskell Programming From First Principles, which is about learning Haskell. Since we’re programmers, we can learn to wield both sides of the Force, Lisp AND Haskell. http://haskellbook.com

Likewise, Let Over Lambda is excellent and I’d argue even more mind bending than SICP and is about DSLs in CL macros, with a final project of making the language Forth a valid DSL in CL: https://letoverlambda.com/index.cl/toc

This is one of my pet peeves, and as an end user I feel web apps are one of the worst things to happen to computer software in a long time.

One place the difference is really noticeable is comparing Google Docs to the standalone Microsoft Office applications. Google Docs is missing 90% of the features and the interface is terrible. Whenever I need to create a document I jump to my Macbook and use Pages (which isn’t great either, but beats Google by a mile).

Another place where the difference is really noticeable is comparing Outlook in Office 365 to standalone Outlook. The features are very limited, there’s no ability to customize it, features that existed for 15 years in Outlook aren’t there any more, etc. It’s a mess.

Even the promise of cross-platform compatibility doesn’t work out in practice, because the larger web apps inevitably end up only working on specific browsers.

The most striking thing in my eyes is that millions of dollars from multiple vendors have been poured into making the web a viable app platform, and yet it hasn’t produced much in the way of complex apps that support professional workflows. Some may regard this as a feature (think 37signals, aka Basecamp), but the fact that Gmail and Gmaps remain the most technically sophisticated webapps out there is damning.

It just goes to show you that no amount of money and wishful thinking can paper over a conceptual impedance mismatch.

Sure, it’s a CRM. But it’s also an extremely pluggable environment. Third parties work to offer pluggable functionality, there’s “remote debugging” if your client has issues. There’s entire VMs working on huge sets of data.

It’s very close to being a Smalltalk-style image per user. The web-y-ness allows working with various data sources in interesting ways, allowing third parties to set up stuff within your own environment. It’s an extremely connected environment, and essentially impossible without the web.

I don’t imagine that it’s impossible for there to be a web application that is strictly better than a desktop one, for some given set of tasks, but by the time the platform mutates into one that enables said application, the platform will not look anything like the pig’s breakfast we see today. It won’t be within a million billion miles of elegant, but somehow more layers of abstraction will be piled on and we’ll muddle through, because Google’s gotta keep those ad dollars flowing.

For all the advances, the web has become an execrable platform commandeered by ad agencies and data brokers.
I keep JavaScript locked down pretty tightly using NoScript, whitelisting sites I visit often, mainly to avoid a lot of the egregiously bad ideas.
Some pet peeves: scroll jacking; unreadable, ultra-low-contrast designs; having to load multiple JavaScript frameworks just to read some fucking text with a few pictures.

Currently I’m working on a web-based contact center and communications platform that supports routing of calls/chats/tweets/email/voicemails to agents, review and quality analysis of agent interactions, real-time stats in the browser for contact-center management, workforce planning, agent script editing and display, telephony administration, company directory, chat & telephony for non-contact-center users, and probably a bunch of other stuff I’m forgetting. It’s nothing if not complex and professional.

Like most projects, all of these applications had their fair share of issues, but the problems were not specific to the web, and would likely have been issues in any desktop application built by the same companies. Mismanagement of projects, insufficient time to deal with technical debt, poor business decisions, legacy tech that must be accommodated - those problems are not specific to any platform.

I’m speaking mostly from personal experience, but it may be that the apparent lack of “complex webapps supporting professional workflows” is primarily due to those apps existing in niche markets that don’t get publicity in the same way social media platforms and end-user tools like email/office software do.

Gmail and Gmaps remain the most technically sophisticated webapps out there is damning.

In the consumer space, sure. But there are some reasonably complex enterprise-y web apps. MS Office (and even the Mac productivity suite) historically weren’t “free” applications, so it’s not entirely fair to compare them against “free” web apps.

That being said, web apps are mostly crap and I use them because of factors other than quality (convenience largely).

That reminds me of a project I had once where the old system had a limit of 256 columns per table. The schema was dreadful - full of tables with sets of columns <name>1, <name>2, <name>3, …
It hit the 256 limit pretty early on, so by the time it came to transfer the data there were sets of tables <table>1, <table>2, <table>3, … I found one entity with over 1200 columns.
The best part was that the new system had to exactly match the old system’s schema, which made the transfer relatively easy but … sobs … sad.

Every time I talk to a recent grad I hear a variation of the phrase “I know how to code, I can code in anything”.

The other way this fails is languages not derived from ALGOL. I had this attitude a few years into programming when I knew HyperTalk, Visual Basic, PHP, and Python. I was corrected by diving into SQL, assembly, PostScript, Haskell, esolangs…

Algol-derived is a big one, but bigger is whether the language assumes mutable state is the default state of existence, and immutability is either inexpressible or tacked-on.

For example, Lisp isn’t Algol derived. However, Algol and Lisp are more similar than different on the level of how data moves through a program, because they both assume mutable state is the default, and an unmarked default at that, with relatively weak (if any) support for immutable state tacked on later, if ever. The essential similarity between Lisp and Algol is really pointed up by Scheme, on one side, and Python, on the other.
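
To make the “unmarked default” point concrete, here’s a Python sketch (the same shape applies to most Lisps):

```python
# Mutation is the unmarked default: plain syntax, nothing flags the state change.
xs = [1, 2, 3]
xs[0] = 99                 # in-place update, no ceremony

# Immutability is the marked, bolted-on case: a separate type that
# rejects mutation at runtime rather than by construction.
ys = (1, 2, 3)
try:
    ys[0] = 99
except TypeError:
    pass                   # tuples refuse in-place update

# In a functional language, "update" instead means building a new value:
zs = (99,) + ys[1:]        # ys is untouched; zs is a fresh tuple
```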

OTOH, declarative languages, such as SQL, and functional languages, such as Haskell, really break brains because their model of data is very different. Spreadsheets are another data paradigm: Automated data flow from cell to cell, in an implicitly parallel environment.

I was surprised when I learned that XSLT is a pure functional language, mainly because I didn’t find it that hard at all (my website is generated from an XML file via XSLT). Verbose, hell yes. Hard? Not really. But in retrospect, I can see the functional nature of XSLT.

APL derivatives, Lisplikes, Autohotkey, LabVIEW, Prolog, Excel… I’ve wondered what it would be like to construct a list of “basis” languages that cover all of the different forms of programming, and if such a list is even possible.

Where are COBOL and BASIC in this? They each had huge impact. The main concept was that programming could be almost as easy as pseudocode for basic applications - a flawed idea, but all kinds of people did productive things with it.

I think I omitted BASIC because it doesn’t influence enough other languages. Wikipedia lists Visual Basic, VB.Net, RealBasic (Xojo), and AutoHotKey. There aren’t any interesting crossovers with e.g. Lisp or ML.

Darn, I don’t know how I overlooked it. Apologies. I guess BASIC could be omitted by that criterion, as COBOL already did the Code Like English concept. It’s overall a nice tree of languages. +1 for text art. :)

My attempt at that was to look at Wikipedia’s list of programming paradigms to find key languages for each, filter out those that don’t have a FOSS implementation, and pick the best of them, weighing stated complexity against the learning materials available. You then have something close to the list you’re looking for.