EA Sports' FIFA is one of the most successful video game franchises of all time, and much of its success is well-deserved: nothing else quite captures the finesse and exuberance of the world's most popular sport. But the game has consistently adopted some dark patterns in its online competitive component that make my relationship with it love-hate. These patterns act as a case study in how picking the wrong target to optimize for can ruin the experience for your users.

Dynamic Difficulty, the Skinner Box of FIFA

EA has vehemently denied the presence of a "script" that maximizes engagement in FIFA Ultimate Team or Seasons. Yet their developers have published a paper on exactly that subject: techniques to maximize in-game engagement. Among the specific techniques mentioned is modifying in-game attributes such as player speed and accuracy, essentially forming an algorithmically determined concept of game momentum. EA has a huge incentive to keep you playing: in-game purchases from Ultimate Team made up 28% of their entire revenue last year, so for them the time you're sinking into the game really is money.

B. F. Skinner's mid-20th-century conditioning experiments showed that unpredictable rewards produce the highest and most persistent rate of the conditioned behavior. So EA's goal here is to unpredictably reward wins, maximizing the desired outcome of gamers continuing to play and therefore continuing to spend. Many FIFA players will attest to the idea of "momentum", where wins come clustered together followed by a string of losses. Of course player skill has a large impact on outcome, but to some extent there is a feeling that certain players simply move faster or are more accurate during these winning streaks. With the sheer number of variables that can be pulled or slightly manipulated during a game, the game's manipulation would be very subtle.
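To make the idea concrete, here is a deliberately toy sketch in TypeScript of how a hidden "momentum" system could work. It is purely illustrative, it is not based on EA's code or on the paper mentioned above, and every name in it is made up:

```typescript
// Purely hypothetical illustration of a hidden "momentum" adjuster: recent
// losses plus a dose of randomness nudge invisible attribute multipliers.
interface MatchResult {
  won: boolean;
}

function momentumMultiplier(recent: MatchResult[]): number {
  // Fraction of recent games lost; a frustrated player gets a small boost.
  const losses = recent.filter(r => !r.won).length;
  const lossRate = recent.length > 0 ? losses / recent.length : 0.5;

  // Variable-ratio flavor: the boost arrives unpredictably, so the player
  // can't tell the reward schedule apart from their own skill.
  const luck = Math.random() * 0.04; // up to +4%

  // Map the loss rate onto a subtle multiplier between 0.98 and 1.06.
  return 0.98 + lossRate * 0.04 + luck;
}

// The multiplier would then quietly scale attributes like speed or accuracy.
const recentForm: MatchResult[] = [{ won: false }, { won: false }, { won: true }];
const effectiveSpeed = 80 * momentumMultiplier(recentForm);
console.log(effectiveSpeed.toFixed(1));
```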

Lagging rewards the lagger

Most online matches in FIFA are one-on-one. Instead of a server dictating game speed, as in FPS titles and most other competitive online games, FIFA slows the game down to the level of the player with the slowest Internet connection. In an online FPS, lag is a severe handicap: from the perspective of the lagger, you won't even see the player who kills you until you're already respawning. But in FIFA, the game speed itself slows down or speeds up based on the least reliable connection. As a result, even a player with a fast connection experiences a slow-motion game, one that at times randomly speeds up or suffers input lag because of the slower connection.
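A toy model of that effect (my own sketch, not FIFA's actual netcode) looks something like this: whatever rate the worst connection can sustain becomes the rate of the whole match.

```typescript
// Toy model: a peer-to-peer match runs at whatever rate the slowest
// connection can sustain, instead of a fixed server tick rate.
interface Peer {
  name: string;
  roundTripMs: number;
}

function sharedTickRate(peers: Peer[], maxHz = 60): number {
  // Each peer can only confirm inputs once per round trip, so the match
  // is dragged down to the worst peer's sustainable rate.
  const worstRtt = Math.max(...peers.map(p => p.roundTripMs));
  const sustainableHz = 1000 / worstRtt;
  return Math.min(maxHz, sustainableHz);
}

console.log(sharedTickRate([
  { name: 'fast connection', roundTripMs: 20 },   // could comfortably run at 50 Hz
  { name: 'laggy connection', roundTripMs: 125 }, // drags the whole match to 8 Hz
]));
```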

Which player does this benefit? The one with the most experience playing games at that speed! If you're used to playing the game at full speed, you'll be thrown off by wildly varying spikes of slow and fast gameplay. If you've gotten used to this kind of play due to a poor connection, you've essentially trained in a completely different game, one that you force onto an opponent with far less experience in it.

There's a more insidious side to this: there have been whispers in the FIFA community of gamers who specifically throttle their FIFA connections to play this way and gain an advantage. I haven't found proof of this, but from my experience this would be an effective way to gain a slight upper hand in every online match.

Pacing rewards annoying opponents

This is more frequent at lower levels of play, but you'll see a specific type of player in FIFA pretty often: the kind who abuses pace and simply runs down the side with their fastest player on every possession. There are pretty effective ways to shut this kind of player down, but it results in repetitive defending that is simply boring. The worst part is when you miss a tackle and Messi or Salah gets through on your opponent's twelfth attempt to pace through your defense. After a defeat like this you get the feeling that the game is much less about actual player skill than it should be.

I love the feeling of getting a difficult win in FIFA online, and I've sunk more hours into the game than I care to admit here. But there are certainly frustrating issues with the game, some of which I've outlined here and some that are more generic to big AAA online games, like loot boxes. I like FIFA, but some of these issues stem from a clear motive by EA to optimize for things that are great for their bottom line but terrible for the average user.

My motivation for developing QuickQ, a knowledge sharing platform, is to eliminate knowledge silos within growing teams. One of the most common sources of knowledge siloing is microservices. I don't oppose microservices - once you're past a certain scale they have their place - but I have noticed they can isolate teams and reduce knowledge sharing. Specifically, I believe the service-per-team approach is an anti-pattern and leads to an increase in single points of failure in your org.

Let's take infrastructure as an example: a common practice as your team grows is to assign a group of engineers to infra/devops and other internal tooling. Typically, this team deploys services that wrap common developer tasks such as spinning up a new Kubernetes cluster or provisioning a database. These tools can allow for greater productivity, but they can also encourage developers to rely on a less sophisticated abstraction of an underlying tool instead of connecting their knowledge of the application requirements with the infrastructure that runs it. If your cluster service removes the ability to fine-tune and understand the underlying Kubernetes concepts, then this knowledge has been siloed into a single team that will always be fighting to stay ahead of the next incident.
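As a contrived TypeScript sketch of the problem (hypothetical, not any particular company's tooling), consider a wrapper that reduces cluster creation to a single call. It is convenient, but every Kubernetes decision it makes is now invisible to the teams that depend on it:

```typescript
// Hypothetical internal tool: one call, zero exposed Kubernetes concepts.
// Node counts, machine types, autoscaling, and upgrade strategy are all
// decided here by the infra team and hidden from everyone else.
interface ClusterRequest {
  team: string;
  environment: 'staging' | 'production';
}

function createCluster(req: ClusterRequest): string {
  const nodeCount = req.environment === 'production' ? 5 : 2; // silently chosen
  const machineType = 'n1-standard-4';                        // silently chosen
  // ...imagine the actual provisioning call here...
  return `${req.team}-${req.environment}: ${nodeCount} x ${machineType}`;
}

console.log(createCluster({ team: 'payments', environment: 'production' }));
```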

The flip side of this approach is an organization that empowers developers at every level to dive deep into devops tooling and add functionality to your internal tools as needed by their services. But this only gets you so far: now instead of having a service per team, you're creating a platform per team, with each team contributing to the underlying infrastructure of their platforms but siloed from each other. Without a culture of collaboration your teams will inevitably work as islands, missing opportunity after opportunity to integrate and create a unified application. Instead, common requirements between teams will result in redundant work and wasted time.

Realtime communication tools such as Slack can help with the problem but only if there's a culture of knowledge sharing within the organization. Empower your product and engineering folks to make announcements and ask questions, and importantly, record the answers to these questions in a place that everyone can access. Engineers don't want to feel enclosed by a single service: they want to be empowered to create and contribute at any layer needed to make a beautiful product.

I created QuickQ, a knowledge sharing platform and Slack app, as a way to collect knowledge in the place where it's actually being shared: directly within your communication app. If the barrier to sharing knowledge becomes so low that as soon as a question is answered it gets recorded for future devs to search through and easily find, knowledge silos can be eliminated. It's free forever for up to five users so give it a try.
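To illustrate the general idea, here's a rough sketch of how a Slack app can capture an answer the moment it's confirmed. It uses Slack's Bolt framework and is a hypothetical example, not QuickQ's actual implementation; saveToKnowledgeBase is a placeholder for wherever the answer gets stored:

```typescript
import { App } from '@slack/bolt';

// Hypothetical sketch: when someone marks a reply with a ✅ reaction,
// archive the thread's question and answer so they can be searched later.
const app = new App({
  token: process.env.SLACK_BOT_TOKEN,
  signingSecret: process.env.SLACK_SIGNING_SECRET,
});

app.event('reaction_added', async ({ event, client }) => {
  if (event.reaction !== 'white_check_mark') return;
  if (event.item.type !== 'message') return;

  // Fetch the thread the reaction was added to.
  const thread = await client.conversations.replies({
    channel: event.item.channel,
    ts: event.item.ts,
  });

  const question = thread.messages?.[0]?.text;
  const answer = thread.messages?.find(m => m.ts === event.item.ts)?.text;
  await saveToKnowledgeBase({ question, answer });
});

// Placeholder: persist the Q&A pair somewhere searchable.
async function saveToKnowledgeBase(entry: { question?: string; answer?: string }) {
  console.log('Archiving Q&A:', entry);
}

(async () => {
  await app.start(3000);
})();
```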

I don't think an application alone can fix knowledge siloing. It certainly requires a shift in culture and procedure. But so often valuable knowledge gets buried in the firehose that is your communication app. QuickQ can help prevent this and get your org started on the path to creating strong, interconnected engineering teams.

In the rapid-growth phase of a startup, one of the most challenging problems is ensuring that your growing team has access to the required knowledge to properly and efficiently do their jobs. This is an issue in every company, but it's particularly difficult for a startup for several reasons. With a sudden and massive increase in hiring, the core team finds itself juggling two roles, maintenance of the platform and onboarding/mentorship, and becomes overwhelmed. The technologies and techniques used are also in flux as the platform is forced to scale. And employees are often not incentivized to share knowledge, or in the worst cases are implicitly incentivized against knowledge sharing.

While the first two issues can largely be solved with the proper tooling to document and share knowledge, the last issue cannot be fully solved with technology; the solution must include a strong culture of rewarding knowledge sharing and preventing knowledge siloing.

Make sharing knowledge easy

To empower your employees to share knowledge, you should make it as easy as possible for them to document information relevant to the company, find the right information when needed, and quickly answer questions without getting overwhelmed. There are countless tools to collect and categorize a team's knowledge, but from my experience many of these solutions fall short when it comes to actually finding and sharing the information contained in them.

As a software engineer at a rapidly-growing startup, I noticed that tools like Confluence often went unused or became outdated, and as a result the same questions were asked and answered on Slack time and time again. That's part of the reason why I developed QuickQ, a web platform and Slack app designed to solve this "last mile" issue of knowledge sharing. By making it easy to collect and share knowledge within the main communication tool of the company, my hope is that knowledge sharing becomes more collaborative, more real-time, and less of a time sink.

Reward knowledge sharing

Regardless of how easy it is to share knowledge, it won't become a part of your company's culture if it's not properly incentivized. Knowledge indexing and sharing requires careful thought, time, and empathy for those who are learning, yet it is often ignored or taken for granted. Managers should explicitly encourage knowledge sharing, account for it when planning work, and offer praise when they see it happening. Consistently answering questions related to technical processes or onboarding saves the company a massive amount of time and effort, ultimately affecting the bottom line.

I'm currently working on a detailed admin dashboard for QuickQ that will track who is answering the most questions and sharing these answers when needed so you can easily identify the greatest contributors on your team. But technology alone cannot solve this problem: part of a knowledge sharing culture must account for contributions when determining employee compensation and raises.

Discourage knowledge siloing

The problem of knowledge silos is perhaps the most difficult to solve, and while tools like QuickQ certainly help, the most effective solution is a culture of knowledge sharing. Part of this is that managers should be adept at identifying knowledge siloing and explicitly discourage it. For example, when one engineer is always fixing the same issue that crops up in your production environment, ask yourself if their process can be easily documented and shared with new hires. If you instead reward the engineer for fixing the same problem a fifth time, you are implicitly encouraging siloing and may be preventing a more collaborative approach to permanently addressing the issue. Other strategies for discouraging siloing include promoting cross-team collaboration and introducing shared goals and metrics.

I became interested in addressing the problem space of knowledge sharing after seeing patterns and pain-points in a rapid-growth startup. If you're curious about a new and evolving approach to knowledge sharing, check out my new platform, QuickQ.

Note: As was pointed out in numerous comments on Hacker News and here, this approach is problematic for a number of reasons. Transferring a secret key between devices in this manner can leave you vulnerable to serverside exploits and increases the likelihood that your key will be exposed. This method also doesn't allow key revocation or forward secrecy. The goal of my project, Mentat, is to strike a good balance between privacy and convenience/features; it's an early-stage project and I'm still trying to find that balance. If your goal is ultimate privacy, there are a bunch of projects already out there that are better suited to your needs: check out Signal, Matrix, etc. That being said, what follows is the original article...

When it came time to implement 2FA in my open-source project Mentat, I wanted to try something a little different. As an end-to-end encrypted chat app, asymmetric encryption was already an important aspect of the platform, and was easy enough to implement using OpenPGP.js. When a user signs up for the platform, a keypair is generated and the public key is saved in the database as part of that user's identity. But an issue arises when the user wants to sign into a different device: how can the user's private key be transmitted in a way that doesn't reveal their credentials to the server? As it turns out, I was able to solve this issue and add a second authentication factor in the same step.

Signing in on Mentat starts with the user inputting their email and password on a new device. When this occurs, the device will generate a brand new keypair and send the public key to the server. The server will check if this public key matches the one stored for the user, and this check will fail because a keypair has already been added as part of the signup process. The user is then shown a wall explaining that the device needs to be authenticated:

Meanwhile, a request will be sent to all of the user's previously-authenticated devices. The request will contain the public key of the new device and will ask if this request should be accepted:

If the request is accepted, the authenticated device will encrypt the user's private key using the new device's public key and transmit this packet to the new device. The new device will decrypt the user private key and replace its keypair with the valid keys, thus authenticating this device and receiving the user keypair at the same time. With the valid keys, the new device is able to decrypt group chat messages received from the server and send new messages under a single identity between devices.
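Roughly, the exchange looks like this. This is a simplified sketch written against current OpenPGP.js v5-style calls; Mentat's actual code (and the library version it targets) may differ, and the transport between devices is omitted:

```typescript
import * as openpgp from 'openpgp';

// NEW device: generate a temporary keypair; the public half is sent to the
// server along with the sign-in attempt.
async function generateTemporaryKeypair(email: string) {
  return openpgp.generateKey({
    type: 'ecc',
    curve: 'curve25519',
    userIDs: [{ email }],
    format: 'armored',
  });
}

// ALREADY-AUTHENTICATED device: if the user approves the request, encrypt the
// real account private key to the new device's temporary public key.
async function approveNewDevice(newDevicePublicKey: string, accountPrivateKey: string) {
  const encryptionKey = await openpgp.readKey({ armoredKey: newDevicePublicKey });
  return openpgp.encrypt({
    message: await openpgp.createMessage({ text: accountPrivateKey }),
    encryptionKeys: encryptionKey,
  });
}

// NEW device again: decrypt the packet with the temporary private key and
// replace the temporary keypair with the real account keypair.
async function receiveAccountKey(encryptedPacket: string, tempPrivateKey: string) {
  const decryptionKey = await openpgp.readPrivateKey({ armoredKey: tempPrivateKey });
  const { data } = await openpgp.decrypt({
    message: await openpgp.readMessage({ armoredMessage: encryptedPacket }),
    decryptionKeys: decryptionKey,
  });
  return data as string; // the armored account private key
}
```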

Some work still needs to be done to increase the security of this process. For example, the server (or another device) should verify that the new device truly owns the private key before lifting the 2FA gate on the new device. This can be achieved by simply signing a message and having this signature verified. Additionally, the request could list some details, including model or OS, of the new device requesting access, in case a fraudulent request was sent.
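That proof-of-possession check could be as small as the following (again an OpenPGP.js v5-style sketch; the challenge string and function names are my own):

```typescript
import * as openpgp from 'openpgp';

// NEW device: sign a server-issued challenge with the account private key.
async function signChallenge(challenge: string, accountPrivateKey: string) {
  const signingKey = await openpgp.readPrivateKey({ armoredKey: accountPrivateKey });
  return openpgp.sign({
    message: await openpgp.createCleartextMessage({ text: challenge }),
    signingKeys: signingKey,
  });
}

// SERVER (or another device): verify the signature against the stored public
// key before lifting the 2FA gate.
async function verifyChallenge(signedChallenge: string, accountPublicKey: string) {
  const verificationKey = await openpgp.readKey({ armoredKey: accountPublicKey });
  const { signatures } = await openpgp.verify({
    message: await openpgp.readCleartextMessage({ cleartextMessage: signedChallenge }),
    verificationKeys: verificationKey,
  });
  await signatures[0].verified; // rejects if the signature is invalid
  return true;
}
```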

When I announced my group chat application, Mentat, on some forums a few weeks ago, a common question was "Why would I use this over the apps already out there?" Fair enough; while I've derived a lot of personal enjoyment from designing this side project from the ground up, that alone can't be expected to attract users. What I'd like to outline in this post is the key feature that sets Mentat apart from other group chat platforms, its intended userbase, and a set of design choices that I consider crucial to the success of an app like this.

The key feature: Tag all the things

Realtime chat solutions seem to fall into two broad categories: business and casual. With a business-oriented app like Slack, users and topics are divided into clearly defined groups or channels. This segmentation of the conversation allows for businesses large and small to keep their chats organized and as free of noise as possible. Importantly, it also serves to exclude users from conversations that they aren't needed on or shouldn't see.

Casual platforms like Facebook Messenger on the other hand are intended to be inclusive, linear conversations where every group user can view every message. The intended userbase is a group of friends rather than coworkers. While Messenger and others have crafted great features for linear group chat, what seems to be lost on the road between business and casual solutions is categorization of messages: a fantastic realtime chat app can fall apart when a user wants to view past messages by a particular topic or category, even with good built-in search features.

Mentat is an attempt to bridge this gap. With Mentat, the goal is to have categorization of messages be as fluid and natural as possible while still maintaining the inclusive community of a casual chat app. It achieves this with message tags (think Twitter hashtags). If you're posting a funny meme in your group chat, for example, you can post the link followed by #meme to have the tag immediately present on the posted message. Alternatively, after you've posted the meme any member of the group can tag your message for easy retrieval later. The conversation continues linearly, but if you're interested in viewing this meme or one posted in the past, you can select the tag and immediately see all past messages tagged #meme. In the same way, a message can be tagged with multiple categories and you can select multiple tags to see an ever-growing list of messages relevant to your current focus.
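Mechanically the concept is simple. Here's a rough TypeScript illustration (my own sketch, not Mentat's actual code) of extracting tags from a message and filtering the history by the currently selected tags:

```typescript
interface Message {
  id: number;
  text: string;
  tags: Set<string>;
}

// Pull #tags out of the message text, e.g. "check this out #meme".
function extractTags(text: string): Set<string> {
  return new Set([...text.matchAll(/#(\w+)/g)].map(m => m[1].toLowerCase()));
}

// Any member of the group can also tag an existing message after the fact.
function addTag(message: Message, tag: string): void {
  message.tags.add(tag.toLowerCase());
}

// Selecting several tags shows every past message matching any of them,
// so the list grows as you add tags to your current focus.
function filterByTags(history: Message[], selected: string[]): Message[] {
  return history.filter(m => selected.some(tag => m.tags.has(tag.toLowerCase())));
}

const history: Message[] = [
  { id: 1, text: 'https://example.com/cat.gif #meme', tags: extractTags('https://example.com/cat.gif #meme') },
  { id: 2, text: 'standup moved to 10am', tags: new Set() },
];
console.log(filterByTags(history, ['meme']).map(m => m.id)); // [1]
```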

Mentat is intended for groups of friends, but I can also see a use-case for small teams. In the latter case, creating "channels" wouldn't be a laborious manual process; they would be created on the fly by embedding tags. It eschews the exclusive aspect of traditional channels in favor of a tight-knit, deeply-categorized linear conversation.

A solid foundation

Tagging is a key feature of Mentat but certainly not the only one. I view some features and design choices as crucial to the success of a chat app:

Privacy. Users don't want the feeling that someone is staring over their shoulder as they type. A chat app should end-to-end encrypt its messages; I use OpenPGP.js for this.

Open-source. Some may view this as optional. But how can you gain user trust that you're not harvesting data and that conversations are indeed private unless they can see the source and optionally host it themselves? You can view the source here.

Link previews. Users want to see where a link leads before clicking, and for an image link the preview itself usually suffices. This feature requires the server to fetch the link first; if you don't want the server to know your links, you can turn this feature off. (A rough sketch of the server-side part follows this list.)

Notification system. Users want good defaults for notifications. I used Web Notifications for this, and I'm planning push notifications as the platform expands.
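For the link previews mentioned above, the server-side part is conceptually just a fetch plus a little HTML scraping, something along these lines (a sketch, not Mentat's implementation; a real version would want a proper HTML parser):

```typescript
// Server-side sketch: fetch a linked page and pull out Open Graph metadata
// for the preview. This is exactly why the server learns which links you
// share, and why the feature can be switched off.
async function buildLinkPreview(url: string) {
  const response = await fetch(url);
  const html = await response.text();

  // Crude regex scraping for illustration only.
  const pick = (property: string): string | undefined => {
    const match = html.match(new RegExp(`<meta[^>]+property="og:${property}"[^>]+content="([^"]*)"`, 'i'));
    return match?.[1];
  };

  return {
    url,
    title: pick('title'),
    image: pick('image'),
    description: pick('description'),
  };
}

// buildLinkPreview('https://example.com').then(preview => console.log(preview));
```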

Thanks for reading. If this sort of thing interests you, check out the live demo here or the GitHub repo.

At work I've recently had to build and release an Android app many times over, each time with slight bug fixes and updates, for a client to test. I'm hoping to incorporate Fastlane and/or a CI tool to automate this soon, but in the meantime I'm burdened with the banal task of building an APK, signing, optimizing, uploading, and repeating many times over. Although it's the same boring process every time, the amount of software tooling involved is staggering: Gradle, jarsigner, zipalign, and since this is a React Native project, node, yarn, Webpack... you get the idea. Each one of these tools produces dozens or hundreds of lines of logs as well, so for better or worse you learn to ignore the vast majority of tooling output. On this note, I just recently noticed a warning which I had never previously paid attention to:

After jarsigner does its thing, it warns that my certificate will expire, and that this may start affecting users... in 2045! (or after revocation). What a vote of confidence that my app will not only be initially successful, but that users will be signing up in droves to verify the APK's signature 30 years from now. Thanks, jarsigner!

Do you have a funny or interesting example of software tooling being relentlessly optimistic? Leave a comment!

Learning Erlang, and the OTP framework in particular, has given me a better understanding of distributed systems and their fundamental building blocks. Before this, my experience with distributed systems was solely in the realm of Kubernetes, which we use at work for deploying scalable, distributed web services.

I was introduced to Erlang through Elixir. The Phoenix framework leverages Elixir to build performant, functional web applications, and it often comes up in the Rails community as a better-performing alternative to Ruby. Elixir depends on the decades of development behind the Erlang VM and in many ways is just syntactic sugar on top of Erlang/OTP. For that reason I wanted to get a better understanding of the underlying technologies, so I picked up the book you see above, Erlang and OTP in Action.

Building Blocks of a Distributed System

In Erlang one can take distributed computing for granted. To start off, all code within Erlang runs in processes. You can look at a process in Erlang and Elixir as a basic building block of the language much like one might view a class within an object-oriented language. An application is thus a tree of interconnected processes. Because processes are so native to Erlang, all of these individual processes can automatically be spread across the available cores of the system running them. And because processes are each isolated environments which depend on a built-in system of message passing, the Erlang environment can treat a distributed network of systems the same way it treats a single one. This may sound like magic, but in reality it is leveraging the work that experts have put into building possibly the most scalable environment currently available for development.

Although Kubernetes may immediately seem to solve a very different problem than Erlang (for one, Kubernetes isn't a programming language), there are some important similarities when it comes to the structure of its distributed implementation. Much like Erlang uses the basic building block of a process, Kubernetes uses the container, or more generally the pod, as a base. Every node in a kube cluster can run one or more pods, just the same way that every node in an Erlang cluster can run one or more processes.

Comparing APIs

If we want to discuss implementation details, it would be helpful to introduce OTP, a framework built into Erlang for structuring applications. Building on top of the process-based system, OTP (Open Telecom Platform) offers a further abstraction known as a GenServer. A GenServer adds standardized message sending and receiving, state management, and a ton of other functionality to a process, allowing you to spin up a server that reacts to input and performs useful work very quickly. Furthermore, OTP adds the concept of a supervisor, a parent process that manages the lifecycle of its child GenServers. When a child process fails unexpectedly, the supervisor can restart that individual process without affecting the other processes.

Kubernetes leverages many of the concepts we just discussed. To name a few, Kubernetes has an API server running on the master node which acts as the supervisor for its cluster. Like OTP, this API will ensure that the running pods are healthy based on their underlying deployment (very similar to a child specification in Erlang). And if a pod goes down unexpectedly, it will be restarted based on the restart strategy; same as OTP!

To wrap up, it's evident that much of the work that went into Erlang has inspired the design of Kubernetes. You can look at Kubernetes as a language-agnostic implementation of the same kind of distributed system that's found in the Erlang environment. It's inspiring to see the way open source projects have built on, iterated on, and in many cases directly inspired each other over the years.

A while ago I published redux-remote-datatable. It's a React and Redux-based table for serverside-processed data, and it looks like this:

At the request of a GitHub user, I added an example implementation of the API written in Ruby on Rails. You can find that here. This is the server I used to capture the above gif.

Over the weekend I made another API implementation in an effort to showcase the ease of switching backend services for the component and to learn a new framework. I chose Phoenix (Elixir), and the result can be found here. Both this and the Rails project are fully dockerized and utilize docker-compose.

I really enjoyed my development experience in Phoenix. Rails knowledge ports easily to the new framework and I find myself being fairly productive in it already. In terms of performance I can only speak to development mode, but I recorded roughly 3x faster response times with Phoenix, with the table's page rendering in 200ms compared to Rails's 600ms. This made the datatable noticeably snappier.

By the way, props to the congress-legislators GitHub project which the seeds for both APIs depend on.

One of my most frequently-used open-source tools is datatables.net, a jQuery-based interactive table with dynamic sorting and searching. I sought to bring the simplicity of that project's serverside API to React, using Redux to handle state changes. The result is redux-remote-datatable.

I limited the scope solely to server-processed data since this is primarily how I used datatables in the past. Take a look at the project readme for more info or to get started.

tl;dr: A 10x programmer does not necessarily know one language/framework ten times as well. More likely she uses 10 programming tools just as well as others who master only one.

Look up programmer on Indeed and you'll immediately see how fragmented CS jobs are. It's the nature of the field: each discipline could perhaps take multiple lifetimes to completely master, so of course there's no one who "knows it all". But in this post I'd like to make the point that titles have become too fragmented, and that a 10x programmer is really simply anyone who is able to cross these boundaries to find the right tool and employ it proficiently.

Spend a decent amount of time in this field and you'll run into the idea that overusing a single tool/framework can make one myopic. I've certainly noticed this with Ruby on Rails.

10x: "Why use RoR for an API-only app with no database when you could go with Sinatra? Or even serverless?"

1x: "Well, those other options aren't RoR".

The difference here is obviously not that 10x is so much better at Rails than 1x. Rather it's that 10x has the knowledge that there are better solutions. This distinction becomes even more important when you're crossing languages, because languages are designed to solve, in some cases, massively different problems depending on their architecture. In the real world, problems don't conform to a particular framework or even job title. Therefore, 10x crosses these language barriers when necessary, even if she has a preferred language.

In my field, devops is just another tool in the toolbelt. But it's also a whole job title because, as I mentioned at the beginning, every discipline has massive knowledge depth. A 10x programmer understands that tools like Kubernetes and serverless aren't magical realms for devops to deal with; they are simply tools that abstract away concepts that she already knows. Kubernetes abstracts away distributed computing just like nginx abstracts away HTTP.

Finally, I don't believe 10x is some exclusive club. In many cases, a 10x programmer has put ten times the effort/time into learning these tools. And so the other side of that is any 1x programmer can become 10x with time and effort. If you're able to become proficient at one modern tool, you can certainly become proficient in others. And that's the path to advanced, senior, 10x, guru, or whatever you want to call it.

If you've ever played NetHack, the beloved roguelike, then you'll be very familiar with the above image. It's the first message you receive when beginning a game and, due to the permadeath nature of roguelikes, you're likely to begin several games in a single session of NetHack. Out of the few reincarnations of the classic game that have appeared in the last couple of decades, I'm partial to NetHack4. It provides a nice polish to the original codebase as well as a practically endless list of improvements.

NetHack4's community offers a server to play on but I was interested in hosting my own. After a bit of digging, I was a little surprised that I didn't find a docker image for those who'd like to host their own server. So I created one here!

The most notable aspect of NetHack4's stack is its use of inetd. I hadn't previously encountered this tool, but it's been heavily used in the past as a sort of "superserver" that routes Internet requests by spawning other processes; its main purpose is to conserve server resources by only requiring one daemon to be running to route requests. By passing the -i option to inetd at runtime, the process runs in the foreground, and therefore Docker can attach to it on docker run.

You can also use the image as a way to play locally or to connect to a server. Do this by running something like: docker run --entrypoint /app/nethack4 -it kenforthewin/nethack4-server. And here's how to connect to my server:

ZID is my first foray into Shopify app development. I used the official Shopify Rails engine to get up and running quickly. After toying with a few ideas, I settled on making an app to find and delete products with 0 inventory. The use case is for vendors with large numbers of low-inventory products, such as a used book store. The result is ZID.

Here are a few screenshots of it in action:

All pretty simple stuff. I used Bootstrap and DataTables on the front-end. The products are indexed in ElasticSearch for quick retrieval.

The app is currently being reviewed by Shopify, so I'll update this article when I have a link to it in the Shopify App Store.

I chose to learn React due to both the hype and the fact that we had begun to code React Native at work. It's been a mostly positive experience with some exceptions here and there. I still don't see much of a point in it for small to midsize projects, where speed of development would point me more towards jQuery/CoffeeScript spaghetti than a full, modern front-end framework. The reason is that the most important aspect of early development on these projects is speed of iteration. Everything else comes second, including how reactive the front-end is. But React seems like a nice way to organize 1) massive front-end projects, where splitting into a lot of small components makes the project much cleaner, and 2) one-off components when traditional rendering becomes too slow at scale. In the case of the former, my logical next step was to pair a Rails API-only app with a React frontend and add user authentication.

react-rails-auth is a fully dockerized repo that contains the app I coded to learn React and Redux. It's meant to be as unopinionated as possible so that I (and you!) can use it as a starter for any future projects in react/rails.

Nowadays, I'm warming up to the idea of Turbolinks and, very recently, Stimulus for "the HTML you already have."

Recently, I've been drawing a lot of inspiration from nginx-proxy. To summarize, nginx-proxy combines Docker event-based nginx config generation with automatic nginx reloading. This allows you to define a VIRTUAL_HOST env variable on any container and have it added to the reverse proxy configuration in real time with no additional effort. This saves me a ton of time at work, where we host dozens of integrations across several servers and as many as 10 different integrations on a single server. There is minimal downtime on deploy as well: just pull the new changes and start a new container with the same virtual host, and nginx-proxy does the rest (make sure to stop the old container though). Since I was doing this kind of deployment a lot, I thought I'd take a shot at automating the process through an API that runs in its own container. From what I've seen, there isn't much out there that fills this sort of niche. You can always host a single-node Kubernetes cluster, but that introduces a lot of overhead and configuration; it's definitely not as simple as running a container.

nginx_proxy_zero is my ongoing effort to create this sort of simple, idiomatic interface around deployments on a single instance. It's still rough in terms of features, and there's plenty I want to add to it. For now, there's one endpoint available to you: POST /update_deployment, and the body of your request will look something like this:
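The original example isn't reproduced here, but based on the description below, a request would look roughly like this. The field names other than name are my guesses rather than the actual API, and the port assumes Sinatra's default:

```typescript
// Hypothetical illustration of the request; real field names may differ.
fetch('http://localhost:4567/update_deployment', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    name: 'nginxproxyzero_some-zerodowntime-service_1', // running container to replace
    image: 'myorg/some-zerodowntime-service:latest',    // new image to pull (hypothetical field)
    env: { VIRTUAL_HOST: 'service.example.com' },       // picked up by nginx-proxy (hypothetical field)
  }),
}).then(res => console.log(res.status));
```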

So, what's happening here? This setup assumes there is already a running container named nginxproxyzero_some-zerodowntime-service_1 and that the intended action is to update this container via a rolling deployment. Zero will pull the new image and start a new container, then perform a health check. When the new container is healthy, it will rename the new container to the name param and stop/remove the old container. The API is a Ruby/Sinatra app and, imo, is very easy to read if you want to track what's happening under the hood.

Are there tools out there like this? Would you find something like this useful or want to contribute to it? If so please reach out on HN or Twitter.