
Dynamic programming is one of those things that is deceptively simple to the novice but ridiculously obvious to the well-trained, meaning what it solves for requires an understanding of execution context and runtime that is neither merely theoretical nor merely practical. Like so much in computer science, knowing what it is is different from knowing how to do it, which are both altogether different from being able to do it.

The point of dynamic programming is efficiency: improving runtime by discerning and solving subproblems of algorithmic repetition; that is, where and how does an algorithm repeat itself? This is especially relevant to recursive algorithms, where the computer often has to do the same computation in multiple recursive branches. The classical example is the Fibonacci sequence; below we have the standard recursive function and the memoized version side by side.
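A minimal sketch of that pair (function names are mine):

```javascript
// Naive recursion: every call spawns two more, recomputing the same
// subproblems an exponential number of times.
function fib(n) {
  if (n < 2) return n;
  return fib(n - 1) + fib(n - 2);
}

// Top-down memoized version: the memo object is shared across all
// recursive calls, so each fib(n) is computed only once.
function fibMemo(n, memo = {}) {
  if (n in memo) return memo[n];
  if (n < 2) return n;
  memo[n] = fibMemo(n - 1, memo) + fibMemo(n - 2, memo);
  return memo[n];
}

console.log(fib(25));     // 75025
console.log(fibMemo(25)); // 75025
```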

While JavaScript notoriously has difficulty with large numbers, even at the modest, comfortably-handled input of 25 we see a performance difference: 2-3ms for the non-memoized version and a blazing 0ms for the memoized version!

Two things to note. First, this kind of memoization is called “top-down” — for some, it simply is what “memoization” means. Second, the key JavaScript behavior making it possible here is what is often called “pass by reference” (strictly speaking, an object reference is passed by value, but the effect is the same): every recursive call — and each call spawns two more — reads and modifies the same memo object, making it a register of sorts we can check to see whether we’ve already performed a certain computation. As stated earlier, this is the key to dynamic programming: improving algorithmic time complexity by omitting redundant steps.

Here’s some REPL code I just threw together that prints the binary representations of all numbers between 0 and num formatted such that every representation is left-padded with zeroes to the largest bit:
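That snippet isn’t shown here, but a reconstruction along the lines described (the function name is mine) might look like:

```javascript
// Print the binary representation of every integer from 0 to num,
// left-padded with zeroes to the bit length of num itself.
function printBinaryRange(num) {
  const width = num.toString(2).length; // bits needed for the largest value
  const lines = [];
  for (let i = 0; i <= num; i++) {
    lines.push(i.toString(2).padStart(width, "0"));
  }
  return lines;
}

console.log(printBinaryRange(5).join("\n"));
// 000
// 001
// 010
// 011
// 100
// 101
```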

Learning data structures and algorithms can be tricky; memorizing what O(n) means is one thing, while knowing it well enough that you can intuitively see what it means is another.

Along these lines, I will here discuss how I see O(n!).

O(n!) says that an algorithm’s work grows factorially with input size n: each time n increases by one, the total work is multiplied by the new value of n.

Think of it like a loop where the number of iterations of the loop equals n!, i.e., an input size of n will mean n! iterations.
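A concrete O(n!) algorithm makes the intuition tangible; generating all permutations of an n-element array is the classic case (my example, not from the original post):

```javascript
// Generating every permutation of an n-element array does n! units of
// work at the leaves alone: n choices for the first slot, then n-1,
// then n-2, and so on.
function permutations(arr) {
  if (arr.length <= 1) return [arr];
  const result = [];
  for (let i = 0; i < arr.length; i++) {
    const rest = [...arr.slice(0, i), ...arr.slice(i + 1)];
    for (const perm of permutations(rest)) {
      result.push([arr[i], ...perm]);
    }
  }
  return result;
}

console.log(permutations([1, 2, 3]).length);    // 6, i.e., 3!
console.log(permutations([1, 2, 3, 4]).length); // 24, i.e., 4!
```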

This is how I see rate of change as central to the concept of algorithmic complexity that big O represents: reliably predicting the rate at which an algorithm’s work grows means that, given an input size of n, we also know the operation volumes of n’s neighbors, making any specific increase or decrease to n irrelevant to the complexity class, which is why we drop constants.

Week 1 of senior phase down and I’m right in the thick of our first project: building an ecommerce site, i.e., an Amazon clone. So how does one go about doing this? Well, one does not; that is, we were put in teams of four to tackle this project, for which we were given just over a week. Working in pairs and groups has been heavily emphasized, and I completely understand why: you can only understand a problem once you walk through it, and the best way to walk through a software problem, which is fundamentally a language problem, is to talk through that problem. That, and building an ecommerce site by yourself, in a week, would be pretty intense.

So how did we begin? After initializing a git repo and waffle workflow manager, my team began with an ERD: Entity Relationship Diagram. This helped us walk through our database schema design at a high level prior to writing any actual code. We found we would need (at least) 4 models: Users, Items, Orders, and Reviews. Relationships? Users have many Reviews; Orders belong to many Items; Items have many Reviews and belong to many Orders; Reviews belong to many Items and belong to many Users (*phew*). We found this required a join table: one between Orders and Items, and this relationship proved to be tricky on the backend.

We are using Sequelize, and in order to set the many-to-many associations we had to promisify the setter; however, Sequelize has poor support for this kind of action and does not allow duplicates! So we are currently stuck only able to accept a quantity of 1 per order per item. As it is technically not blocking our MVP, and we have only a week to have a deployable site, we have moved on, leaving that for a (theoretical) circle-back at a later date. In a “real world” scenario, we would have consulted with the product team to confirm MVP requirements, and if it was required we would have had to reconfigure our models, but as it is, we have plenty of other cod to catch.
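A commonly suggested Sequelize pattern for this situation is a “through” model carrying its own quantity column, so a duplicate becomes a count on one join row rather than a repeated row. A schema sketch only (model names, field names, and the connection URL are my assumptions, not our actual code; assumes the Sequelize v4-era API):

```javascript
// Sketch: a join model with a quantity attribute instead of duplicate
// rows in the join table.
const Sequelize = require('sequelize');
const db = new Sequelize('postgres://localhost:5432/shop'); // hypothetical URL

const Order = db.define('order', {});
const Item = db.define('item', { name: Sequelize.STRING });
const OrderItem = db.define('orderItem', {
  quantity: { type: Sequelize.INTEGER, defaultValue: 1 },
});

Order.belongsToMany(Item, { through: OrderItem });
Item.belongsToMany(Order, { through: OrderItem });

// "Adding a duplicate" then becomes incrementing quantity on the
// OrderItem row rather than calling the setter with repeated instances.
```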

Another not-a-bug-but-a-feature came up yesterday as I created our cart logic. To do so, I utilized Passport.js (only local strategy for now, OAuth is on the docket for this coming week) and cookie-session (saves session in a client cookie as opposed to backend session store). Passport serializes the user as they log in, i.e., places them on the req object for easy access, and cookie-session creates a session and sends the client a cookie in its first response. When the user adds an item to their cart, the server creates a cart array on the session, adds the item to it, and sends it back to the client. The cart then lives on the client side and can be edited by the user at will, whether they are logged in or not. So what’s the problem? Well, if more than one user uses the same browser to shop on our site (I’m looking at you, families), and one of them leaves their cart, even if they log out, the next user will inherit that cart information! Again, similar to the earlier issue, this is non-blocking in terms of MVP but it is something we would definitely need to fix for a production application. (Easiest fix would be to persist the cart in a new model and wipe the cart on user logout, but this would only work for logged-in users.)
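The cart logic and the easy fix described above can be sketched as Express-style handlers (the shapes of req and res here are my simplifications, not our actual route code):

```javascript
// Add an item to the cart stored on the cookie-backed session.
function addToCart(req, res) {
  if (!req.session.cart) req.session.cart = [];
  req.session.cart.push({ itemId: req.body.itemId, quantity: 1 });
  res.json(req.session.cart);
}

// The bug: the cookie, and therefore the cart, belongs to the browser,
// not the user. Logging out does not clear it unless we do so ourselves.
function logout(req, res) {
  req.logout();            // Passport removes the user from the session
  req.session.cart = null; // the easy fix: wipe the cart explicitly
  res.redirect('/');
}
```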

Both of these bugs were awesome learning experiences in digging into online conversations on GitHub and SO. Next week, I’ll tackle OAuth, deploy the site to Heroku, and lastly move on to my hackathon project, a browser-based synthesizer app. Stay tuned!

As I transition into the senior phase of my JavaScript studies at Fullstack Academy, I’ve been completely Math.floored by the programming power lurking beneath my fingertips, and it has hardly left me any time to reflect or breathe. But alas! I must breathe, and reflect on these past weeks. So what have I been up to? Here’s a brief list:

Implemented abstract data types, e.g., linked lists and hash tables

Wrote a JavaScript promise library (!)

Built several full-stack JavaScript applications, utilizing the following technologies:

Node.js (release JavaScript from the browser!)

Express.js (build an HTTP server in 10 seconds!)

Sequelize.js (Object-Relational Mapping for Node!)

React.js (dynamic and astonishingly lean front-end!)

Redux (bulletproof state management!)

As I make my way through senior phase here at Fullstack, I will be pausing along the way to go a little deeper into facets of each of these amazing technologies. Stay tuned!

First off, good news: I’ve officially been accepted into Fullstack Academy and will be joining the next immersive cohort. I’m extremely excited for this opportunity to become a software engineer under the tutelage of such a fresh and cutting-edge group of developers. I’m fully committed to a lifetime of building and continually improving the information superhighway I’ve come to take for granted — the internet.

And what is the internet? If I bracket ontological analysis — one that would begin by asking the implicitly prior question of what I mean when I ask for “what” something “is” — I am tempted to answer: 1s and 0s. Binary code.

Similar to my prior post, I’ve recently had another “A-HA” insight regarding CS fundamentals, this time involving the binary encoding of data values in JavaScript. Reading through MDN’s JavaScript guide, I came across bitwise operators. While it was not the first time I had come across these operators, it was (as with so much in programming) the first time I was able to think about it clearly enough to understand how little I understood it. Cue deep dive into binary.

Given a 32-bit value space, since each bit can only be one of two values (On or Off, 0 or 1), this value space can be used as a base-2 counting system to construct, if unsigned, all integers from 0 to 2^32 - 1 (0 to 4294967295), and if signed, all integers from -2^31 to 2^31 - 1 (-2147483648 to +2147483647). If you start counting, a pattern emerges:

 n | 2^4 2^3 2^2 2^1 2^0
---+---------------------
 0 |  0   0   0   0   0
 1 |  0   0   0   0   1
 2 |  0   0   0   1   0
 3 |  0   0   0   1   1
 4 |  0   0   1   0   0
 5 |  0   0   1   0   1
 6 |  0   0   1   1   0
 7 |  0   0   1   1   1
 8 |  0   1   0   0   0
 9 |  0   1   0   0   1
10 |  0   1   0   1   0
11 |  0   1   0   1   1
12 |  0   1   1   0   0
13 |  0   1   1   0   1
14 |  0   1   1   1   0
15 |  0   1   1   1   1
16 |  1   0   0   0   0
Notice the rightmost digit: it is always either a one or a zero, and it alternates with each count; that is, given a non-negative integer x, if its binary representation ends in 1, then x+1’s will end in 0 (and likewise x-1’s, for x > 0). Same with the inverse case: if x’s representation ends in 0, x+1’s and x-1’s will both end in 1. This gets complicated in a signed environment, but the point here is to notice the regular, period-doubling rhythm of the 0s and 1s as you move leftward through the columns.

Moving to the second digit from the right, notice it now oscillates at half the frequency of the first: instead of alternating between 0 and 1 on every count, the second column gives 2 zeros, then 2 ones, then 2 zeros, and so on. The third column gives 4 zeros, then 4 ones. In general, column k (counting from right to left and starting at 1) completes one full cycle every 2^k counts: 2^(k-1) zeros followed by 2^(k-1) ones.

In column 4, then, a “cycle” is 2^4 counts “long,” meaning there will be (2^4) / 2 zeros then (2^4) / 2 ones. This means that, for any given integer x within the limits of the 32-bit scheme we are working with, you can build its binary representation by first finding its leftmost 1 — the largest 2^n value that is less than or equal to x — and subtracting it. You then determine each subsequent digit by checking whether the remainder is greater than or equal to the next-lower power of two: if it is, that digit is a 1 and you subtract again; if not, it’s a 0. Either way, the remainder carries over to the next power until 2^0 is reached, at which point you have the full binary representation of your integer.
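The subtraction walk just described translates directly into code (a sketch; the function name is mine):

```javascript
// Build a binary string by locating the leftmost 1 (the largest power
// of two that fits), then working down: emit a 1 whenever the running
// remainder still covers the current power, carrying the remainder along.
function toBinary(x) {
  if (x === 0) return "0";
  let power = 1;
  while (power * 2 <= x) power *= 2; // locate the leftmost 1
  let bits = "";
  let remainder = x;
  while (power >= 1) {
    if (remainder >= power) {
      bits += "1";
      remainder -= power;
    } else {
      bits += "0";
    }
    power /= 2;
  }
  return bits;
}

console.log(toBinary(13)); // 1101
console.log(toBinary(16)); // 10000
```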

This insight helped me start thinking outside the base-10 box, and again, always exciting to feel the power of knowledge. Until next time!

As I gather the data needed to create the fourth and final table for the Flock app – which also happens to be the table with the most foreign keys, being the table which links all the other three – I am re-reading the excellent (and free!) Eloquent JavaScript. It is extremely concise, and the integrated coding fields for trying practice problems or just tinkering with examples are just awesome.

Not being a CS major, my way into code and the ways of programming has been iterative and, well, circular. From what I’ve gathered, this is not only not unusual, it is simply “the way it is.” To this point, in the introduction to EJ, Haverbeke presents a primitive program that finds the sum of the numbers from 1 to 10. In my own words, this is how the program works:

1. Assign the number 0 to location 0. (This is our sum.)

2. Assign the number 1 to location 1. (This is our count.)

3. Assign the value of location 1 to location 2. (This is our test.)

4. Subtract 11 (the upper limit of the range + 1, so if we were looking for the sum of the numbers between 1 and 100, this would be 101) from the value of location 2. (This establishes whether or not we’ve reached our upper limit.)

5. If the value of location 2 is now 0, go to step 9. Otherwise proceed to step 6. (If it is 0, we’ve summed all needed digits and should stop and simply return the value of location 0, which is step 9.)

6. Add the value of location 1 to location 0. (Our sum-finding action.)

7. Add one to the value of location 1. (This advances our count. The sum-finding action should only run as long as the count is less than or equal to our upper limit.)

8. Repeat steps 3 through 7 until the test in step 5 sends us to step 9.

9. Return the value of location 0. (It’s 55.)
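The steps above collapse into a few lines of JavaScript, with the three memory locations becoming three variables (my translation, not Haverbeke’s own version):

```javascript
let sum = 0;   // location 0
let count = 1; // location 1
// Steps 3-5: copy the count into the test slot and check it against
// the upper limit + 1; a nonzero result means keep going.
while (count - 11 !== 0) {
  sum += count; // step 6: the sum-finding action
  count += 1;   // step 7: advance the count
}               // step 8: loop back to the test
console.log(sum); // step 9: returns 55
```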

Again, not being a CS major – an English major with a Master’s degree, in fact! – wrapping my mind around this very simple program was a bit of an “A-HA” moment. If there is one thing I absolutely enjoy about programming, it is that feeling of progress; the insight is real, and the things you learn make it possible to learn more things in an almost exponential way. Seeing the results of learning to code means feeling the power of programming – until next time!