On March 16, 2015, something amazing happened in the world of PHP. The long-awaited, hotly debated Scalar Type Declarations RFC was accepted for PHP 7! Finally, it will be possible to declare scalar types (int, float, bool, and string) for function parameters and return values:
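For example, a function can now declare scalar types for both its parameters and its return value (a sketch; the itemTotal function here is a hypothetical example also referenced later in this post):

```php
<?php

// PHP 7 scalar type declarations on parameters and return value
function itemTotal(int $quantity, float $unitPrice): float
{
    return $quantity * $unitPrice;
}

echo itemTotal(3, 4.99); // 14.97
```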

The need for safe type casts

By default, scalar types are enforced weakly. So while passing a value such as “my string” to an int parameter would produce an error, values such as 10.9, “42.5”, true, and false would be accepted and cast to 10, 42, 1, and 0, respectively. This behavior lacks safety, since any of these values are likely to be errors, and casting them results in data loss.
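The lossy conversions can be seen with a hypothetical itemTotal function declared with scalar types (a sketch, not code from the RFC):

```php
<?php

function itemTotal(int $quantity, float $unitPrice): float
{
    return $quantity * $unitPrice;
}

// Weak mode accepts and silently converts these values:
var_dump(itemTotal("42.5", 1.0)); // float(42), the .5 is lost
var_dump(itemTotal(true, 2.0));   // float(2), true becomes 1
```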

Enabling the optional strict mode will prevent values with an incorrect type from being passed, but this isn’t a complete solution. Whenever you are dealing with user input, whether from a posted form, URL parameters, or an uploaded CSV, the data will arrive as a string. Before it can be passed to a function expecting an int or float, the data must be converted to the corresponding type.

An explicit cast such as (int) $_POST['quantity'] might seem like the obvious answer, but this is even less safe than the default type coercion! A user could pass a value such as “5 hundred” or “ten” and it would be cast to 5 or 0 without producing an error. This is especially concerning in scenarios where sensitive financial information is being handled.
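Explicit casts never fail; they silently mangle whatever they are given:

```php
<?php

// An explicit cast produces a value no matter how invalid the input is
var_dump((int) "5 hundred");  // int(5)
var_dump((int) "ten");        // int(0)
var_dump((float) "1,000.50"); // float(1), parsing stops at the comma
```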

PHP filters?

In the past I’ve tried to solve this problem by using PHP’s built-in FILTER_VALIDATE_INT and FILTER_VALIDATE_FLOAT validation filters. However, there are problems with this approach, not least its verbosity: validating just two inputs for our itemTotal function requires eight additional lines of code:
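The shape of that validation looks something like the following (a sketch; the field names and exception type are illustrative):

```php
<?php

function itemTotal(int $quantity, float $unitPrice): float
{
    return $quantity * $unitPrice;
}

// Validate two request values before passing them to itemTotal
function validatedItemTotal(array $input)
{
    $quantity = filter_var($input['quantity'], FILTER_VALIDATE_INT);
    if ($quantity === false) {
        throw new InvalidArgumentException('quantity must be an integer');
    }

    $unitPrice = filter_var($input['unitPrice'], FILTER_VALIDATE_FLOAT);
    if ($unitPrice === false) {
        throw new InvalidArgumentException('unitPrice must be a number');
    }

    return itemTotal($quantity, $unitPrice);
}
```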

Introducing PolyCast

In October of last year, Andrea Faulds proposed a Safe Casting Functions RFC to fill the need for safe type conversion. At the same time, I started developing a userland implementation called PolyCast. Although Andrea’s RFC was ultimately declined, I continued to move PolyCast forward, with a number of improvements based on community feedback.

PolyCast comes with two sets of functions. The first (safe_int, safe_float, and safe_string) return true if a value can be cast to the corresponding type without data loss, and false if it cannot. The second (to_int, to_float, and to_string) will directly cast and return a value if it is safe, and otherwise throw a CastException.
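To illustrate the semantics, here is a simplified userland sketch of what safe_int and to_int do. This is not the library’s actual implementation (PolyCast handles many more edge cases, such as integer overflow); it only shows the general idea:

```php
<?php

class CastException extends RuntimeException {}

// Simplified illustration of PolyCast-style semantics
function safe_int_sketch($val)
{
    if (is_int($val)) {
        return true;
    }
    if (is_float($val)) {
        // safe only if no fractional part would be lost
        return $val === (float) (int) $val;
    }
    if (is_string($val)) {
        // digits only (with optional sign), so casting loses nothing
        return preg_match('/^-?[0-9]+$/', $val) === 1;
    }
    return false; // bools, nulls, arrays, etc. are never safe
}

function to_int_sketch($val)
{
    if (!safe_int_sketch($val)) {
        throw new CastException('Value cannot be safely cast to int');
    }
    return (int) $val;
}
```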

For more examples and details on which values are considered safe, check out the project on GitHub. PolyCast is tested on PHP 5.4+, and you can easily install it with composer require theodorejb/polycast.

In my previous post, I described a schema and set of associated queries to persist and update arbitrarily ordered items in a SQL database (using a linked list). This approach can scale to very large lists without degrading performance when adding or rearranging items. But having stored a list, how can it be reproduced in the correct order? This post describes an approach to efficiently sort linked lists from SQL in client-side code. While the examples below are written in JavaScript, the same basic technique works in almost any modern language.

Suppose that you select the following (unordered) linked list from a database:
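The rows might look something like this (hypothetical data; the item_id and previous_item_id columns follow the schema from the previous post):

```javascript
// Rows as returned by the database, in no particular order
const list = [
    { item_id: 4, previous_item_id: 2, name: "Third item" },
    { item_id: 1, previous_item_id: null, name: "First item" },
    { item_id: 3, previous_item_id: 4, name: "Fourth item" },
    { item_id: 2, previous_item_id: 1, name: "Second item" },
];
```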

The naiveSort function re-loops through the list from the beginning each time it finds the next item and adds it to a sorted copy of the array. The function returns the correctly sorted list, but the number of required iterations grows quadratically as the list lengthens, following the formula size + size * ((size - 1) / 2). For example, a list containing 100 items requires 5,050 iterations, while a list containing 1,000 items requires 500,500! With this approach, any advantage of the linked list’s efficient insertion and reordering would be lost in lengthy sort times.
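A sketch of such a function (assuming rows shaped like { item_id, previous_item_id }):

```javascript
// Rescan the list from the start each time the next item is needed
function naiveSort(items) {
    const sorted = [];
    let previousId = null; // the head item has a null previous_item_id

    while (sorted.length < items.length) {
        for (const item of items) {
            if (item.previous_item_id === previousId) {
                sorted.push(item);
                previousId = item.item_id;
                break;
            }
        }
    }
    return sorted;
}
```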

An efficient sorting algorithm

The mapSort function starts by looping through the linked list a single time, adding the item array indexes to a map with a key of the item’s previous_item_id property. It then follows the chain of item_id references through the map to build the complete sorted list. This approach requires (size * 2) - 1 iterations, allowing it to scale linearly with list length.
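A sketch of this approach (again assuming rows shaped like { item_id, previous_item_id }):

```javascript
// One pass to index items by previous_item_id, one pass to follow the chain
function mapSort(items) {
    const indexMap = new Map();
    for (let i = 0; i < items.length; i++) {
        indexMap.set(items[i].previous_item_id, i);
    }

    const sorted = [];
    let previousId = null; // start from the head of the list

    while (sorted.length < items.length) {
        const item = items[indexMap.get(previousId)];
        sorted.push(item);
        previousId = item.item_id;
    }
    return sorted;
}
```

Since each Map lookup is constant time, the cost of the second loop no longer depends on how far down the list the next item happens to be.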

Testing with Node.js on my Core i5 desktop PC, the mapSort function was able to sort a 5,000 item list in an average of 2.3 ms, compared to 68.4 ms for naiveSort. With larger lists, the discrepancy grew even greater. Sorting 100,000 items took an average of over 40 seconds with naiveSort, but just 61.7 ms with mapSort!

There are likely other optimizations that could be implemented to further increase performance, but for most practical purposes this technique should prove sufficient.

Recently I was challenged with enabling users to drag and drop items in a list to sort them in any order, and persisting that order in a SQL database. One way to handle this would be to add an index column to the table, which could be updated when an item is reordered. The downside of this approach is that whenever an item is added or moved, the index of every item beneath it must be updated. This could become a performance bottleneck in very large lists.

A more efficient approach is to use a linked list, where each item contains a reference to the previous item in the list, and the first item has a null reference (you could alternatively reference the next item, with the last item containing a null reference, but this requires the list to be sorted back-to-front, which I find less intuitive).
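One possible table for this (a sketch; the item_id and previous_item_id columns match the sorting examples in the companion post, while the other column is illustrative):

```sql
CREATE TABLE items (
    item_id INT PRIMARY KEY,
    previous_item_id INT NULL REFERENCES items (item_id),
    name VARCHAR(100) NOT NULL
);
```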

Whether an item is added, removed, or reordered, at most three rows need to be updated. This keeps performance nearly constant, regardless of the size of the list. With the basic database implementation complete, in my next post I’ll share an approach to efficiently sort the linked list in client-side code.
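As an illustration (not necessarily the exact queries from my implementation), moving item :m so that it directly follows item :p touches at most three rows, where :old_prev is :m’s previous predecessor; moving an item to the head of the list additionally requires IS NULL handling in the second statement:

```sql
-- 1. Unlink :m by pointing its old successor at its old predecessor
UPDATE items SET previous_item_id = :old_prev WHERE previous_item_id = :m;

-- 2. Point the item that currently follows :p at :m
UPDATE items SET previous_item_id = :m
WHERE previous_item_id = :p AND item_id <> :m;

-- 3. Point :m at its new predecessor
UPDATE items SET previous_item_id = :p WHERE item_id = :m;
```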

This post is based on a research paper I wrote for my Introduction to Astronomy course at Rasmussen College earlier this month.

As the Klingon ship bears down on the Starship Enterprise, preparing to fire a barrage of photon torpedoes, Captain Picard shouts “Maximum warp!” and the Enterprise leaps away towards another star system, faster than the speed of light. Is this scene from Star Trek purely science fiction, or is there truth to the concept of interstellar starships? Despite the significant scientific progress that has been made in this area, energy requirements, high costs, and the problem of time still present enormous challenges to the vision of interstellar travel as portrayed in Star Trek and other films.

The nearest star system, Alpha Centauri, is about 25 trillion miles away from Earth. At the speed of a typical spacecraft, it would take more than 900 thousand years to cover this distance (Millis, 2008a)! Clearly, one of the first major barriers to interstellar travel is finding a way to achieve the speeds necessary to reach neighboring stars within a reasonable timeframe. This would require traveling close to the speed of light, at the very least. However, the amount of energy required to accelerate a ship like Star Trek’s Enterprise to just half the speed of light would be “more than 2000 times the total annual energy use of the world today” (Bennett, Donahue, Schneider, & Voit, 2014, p. 715). Where would such vast amounts of energy come from? While many ideas have been proposed, perhaps the most feasible of these was Project Orion, a propulsion system experimented with from the 1950s to 1960s which involved continuously detonating nuclear bombs behind a spaceship to propel it forward. Unfortunately, not only would this approach make for an uncomfortable ride, but it would also expose the crew to dangerous levels of radiation (Dyson, 2002). In the words of Aerospace Engineer Marc G. Millis (2008a), “we need either a breakthrough where we can take advantage of the energy in the space vacuum, a breakthrough in energy production physics, or a breakthrough where the laws of kinetic energy don’t apply” (last paragraph).

Supposing that the energy problem were solved, and a ship could achieve constant acceleration over any distance, it would only take about five years (from Earth’s perspective) to reach the nearest star, and thirteen years to reach Sirius (White, 2002). However, as the distance, acceleration period, and corresponding velocity increase, an interesting effect known as time dilation starts to become apparent. The greater the ship’s speed, the slower time passes for those onboard. A trip which takes thirteen years from the perspective of Earth would only take seven years for the travelers, and even less time at rates of acceleration greater than 1G (White, 2002). For really long trips, thousands of years could pass on Earth while the travelers only experience a few decades! While this may seem like a beneficial effect, since it would allow travelers to reach destinations much further away than otherwise possible, it presents an enormous problem for interstellar travel. What would be the point of sending individuals to other stars if their work would be of no benefit to those living on Earth? Any trip to a distant destination would almost certainly be one-way.

This brings us to the speculative realm of wormholes and warp drives. The special theory of relativity forbids objects from moving faster than light within space-time, but with enough matter or energy it is known that space-time itself can be warped and distorted (Millis, 2008b). In theory, space could be warped or “folded” to connect two separate points (creating a wormhole). Unfortunately, creating the wormhole would require placing a giant ring (“the size of the Earth’s orbit around the Sun”) of super-dense matter at each end of the wormhole, charging them with enormous amounts of energy, and spinning them up to “near the speed of light” (Millis, 2008b). Even if there were some way to obtain the necessary energy and super-dense matter, how would it be placed at the destination end without first traveling there? While wormholes could hypothetically be useful for frequent travel between two interstellar destinations, they do not provide a viable solution to getting there in the first place.

What about warp drives like those used in Star Trek? While the concept may sound impossible, according to a physicist named Miguel Alcubierre space could theoretically be compressed ahead of the ship and expanded behind it, allowing a ship to travel faster than light without violating the theory of relativity (Peckham, 2012). In effect, it is space that moves, rather than the ship. Unfortunately, creating a warp drive like this would require generating a ring of “negative energy,” and whether it is possible for such energy to exist is still under debate (Millis, 2008b). Assuming it is possible, it seems like this would be the most practical method of interstellar travel. It does not require long periods of time to accelerate and decelerate, passengers would not be jolted from changes in acceleration or pelted with particles of interstellar gas, and best of all time would pass at the same rate for the cosmic travelers as well as those remaining on Earth. NASA is currently in the very early stages of investigating whether such a drive is feasible.

With all the talk surrounding the possibility of moving starships through space at faster-than-light speeds, it is easy to forget that getting the ships into space in the first place is also a problem. In an article published on Gizmodo earlier this year, it was estimated that the cost of constructing a spaceship like the Starship Enterprise using technology available today would be roughly $480 billion (Limer, 2013). Astoundingly, more than 95% of this cost is simply to transport the necessary materials to space! This illustrates the disproportionately high cost of space transportation technology as it currently exists – putting a starship into space simply does not make economic sense at this point, even if we could build one.

In short, the enormous energy requirements, high cost, and problem of time all present significant roadblocks for interstellar travel. The theories proposed for faster-than-light travel are speculative at best, and far from practicality. While a breakthrough in propulsion allowing affordable, safe, and sustained acceleration could potentially allow us to reach the nearest stars, the problem of time dilation would make it infeasible to go further. Without major scientific advances in the areas of negative energy and space-time manipulation, the possibility of visiting an alien home world appears highly unlikely in the foreseeable future.

I originally wrote this post last September as a research paper for my Business Ethics course at Rasmussen College. I decided to post it on my blog now since I still feel strongly about the issues of censorship and online privacy, especially in light of recent leaks about the NSA’s top-secret surveillance programs.

What if the websites and other content we want to access online had to first pass through a filter which determines whether or not the content is favorable to the government, and blocks it if it is deemed critical? Sadly, this scenario is currently a part of life in China. All companies and organizations that operate within the country are required to comply with censorship laws and report the activities of citizens to the government. These conditions presented an interesting dilemma for Internet search giant Google, which was forced to choose between cooperating with government censorship laws and letting another company provide search services to the Chinese. While it is important (and ethical) for international companies to adhere to the laws and regulations of nations in which they operate, if those laws come into conflict with greater ethical interests such as human rights or individual freedom, it is arguably better to cease operations in the country rather than assist the government in its suppression of citizens.

“Don’t be evil.” The phrase was supposed to embody Google’s official corporate philosophy. In January of 2006, however, the company launched Google.cn and began censoring search results for Chinese users. How could a company with such a strong corporate culture practice something so seemingly contrary to their principles? The decision was not made lightly. In 2004, Google policy director Andrew McLaughlin was asked to conduct an ethical analysis with the sole purpose of determining whether Google’s presence would “accelerate positive change and freedom of expression in China” (Levy, 2011, p. 277). After nearly a year of research, McLaughlin determined that while “Google’s presence might benefit China,” the experience of working with a totalitarian government would be morally degrading to Google as an organization (p. 279). Google’s approach to this ethical dilemma demonstrated a teleological moral philosophy. In other words, they evaluated the situation based on its consequences – both to the Chinese people and their company.

Although revenue was very specifically not a consideration in McLaughlin’s report, the business prospects of entering China would have been impossible to ignore. With more Internet users than any other country, China presented an unquestionably alluring business growth opportunity. However, cofounder Larry Page remains resolute that the company was only trying to do the right thing for the people of China. “Nobody actually believes this, but we very strongly made these decisions on what we thought were the best interests of humanity and the Chinese people” (Levy, 2011, p. 280). While Page optimistically believed that Google’s services would benefit the Chinese, his partner Sergey Brin was troubled at the prospect of censorship. As a former refugee of the Soviet Union, Brin had personally experienced the burden of a communist government that imposes constraints on personal freedom (p. 274). In the end, however, Brin, Page, and CEO Eric Schmidt weighed the evil of censorship against the evil of not providing any services to the Chinese, and ultimately agreed that censorship was the lesser evil.

But did Google really have no other alternatives? This was not the case. While search may have been the most profitable of Google’s services, it was not their only service. By the time Google.cn was launched, the company already offered email and mapping solutions that were quickly growing in popularity. Additionally, Google could have pursued new business opportunities that would not require censorship (such as music sales or development platforms). While this approach may have changed little from an individual and societal perspective (if Google did not censor search results in China, someone else would), it would at least avoid the organizational degradation caused by working with a totalitarian regime. On the other hand, it is also possible that the Chinese people would have more freedom today if Google had never participated in government censorship. While Google.cn was in operation, the Chinese government progressively tightened Internet censorship requirements. According to human rights activist Peter Guo, China considers Google to be “one of the greatest threats” to the Communist Party (Dean, 2010). If Google had not compromised their principles, the Chinese business market might have looked less attractive to foreign investors, and the government could have been forced to reduce censorship in order to drive innovation.

From a deontological perspective, avoiding censorship at all costs would simply have been the right thing to do, whether it meant pursuing other business models or staying out of the country entirely. Even if unethical behavior is profitable, executives need to consider the kind of world they are helping to create, and whether or not that concerns them (MacKinnon, 2012, p. xxiii). Google could have continued providing unfiltered search results from outside the country, and while the Chinese government would likely block their search engine much of the time, at least it would be the government doing the censoring, rather than Google.

Google stopped providing censored search results in January 2010, four years after launching Google.cn. Ironically, the incident prompting this decision was a cyberattack, not censorship. Google discovered that the Chinese government was hacking into the Gmail accounts of Chinese human rights activists and stealing their personal data (it’s not hard to guess for what purpose). According to Google co-founder Sergey Brin, this was the “straw that broke the camel’s back” (Spiegel, 2010).

If there is one lesson that can be learned from Google’s foray into government censorship, it is that compromise is not necessary for corporate success, nor does it improve the lives of citizens. Google hoped that by compromising with the government, the government might eventually compromise with them, but the opposite turned out to be true. While the alternatives to censorship may not be as directly profitable, they can still lead to a net benefit since customers and stakeholders will be more willing to trust and support the company. It is also worth asking: if Google was prepared to censor search results for the sake of profit in China, how could they credibly fight similar calls for censorship in the United States (such as the Stop Online Piracy Act)? In the end, I’m glad Google did the right thing by stopping their censorship of Chinese search results. I only wish it hadn’t taken a cyberattack for them to make this decision.

This post is based on a research paper I authored earlier this month as part of my General Psychology course at Rasmussen College.

Have you ever been struggling with a website or application that is difficult or confusing to use, and thought, “This could be so much easier if it were designed differently”? You may be surprised to discover that there are actually psychological reasons for what makes an interface good or bad, intuitive or difficult to use. This post will explore three areas of good interface design: proprioception, Gestalt psychology, and performance – with a focus on their application to web-based interactive software.

Proprioception

Proprioception refers to our body’s sense of position in space, and the position of various body parts in relation to each other. Because software is inherently non-physical, designers have to provide cues to indicate the user’s position. On traditional websites, these more often than not included breadcrumbs and navigation menus. However, in today’s world of varying device types and interaction methods (including keyboards, mice, and touch), new metaphors are necessary. One solution that is growing in popularity is to provide transitions between various screens (Bowles, 2013). By convention, leftward movement is seen as backwards, while rightward movement is seen as forwards or progression (Ibid). Vertical movement disrupts this hierarchy and can be used for actions outside the normal app flow.

There are more applications of proprioception than just transitions. For example, considering the Gestalt principle of similarity (discussed later in the essay), a button that leads to a particular section of an app could be designed and located similarly to the button that exits the section. The user would then understand that there is a connection between the two buttons, with the result that they would more intuitively understand how to navigate within the app. By carefully thinking about the logical location for data and controls within an app, and supplying cues to the user to indicate their position, developers can create an experience that requires less learning and feels much more natural to use.

Gestalt Principles

Gestalt is a German word meaning “shape” or “form,” and it refers to the way visual input is perceived by the human mind (Bradley, 2010). Gestalt psychologists have proposed a number of organizational laws (called Gestalt principles) which “specify how people perceive form” (Huffman, 2009, p. 105). The following section will examine five of these principles: Figure and Ground, Similarity, Proximity, Uniform Connectedness, and Parallelism, and how each of them applies to interactive software design.

Figure and Ground refers to the way humans perceive elements as either figure (the object of focus) or ground (the background which contains or supports the figure). When two objects overlap, the smaller object is seen as the figure against the larger background (Bradley, 2010). When designing web applications, it is important to ensure that there is sufficient padding around figures in the interface, and sufficient contrast between the figure and ground – this will allow elements to stand out and make the layout easier to understand at a glance. Consider the following example: the left-hand button’s lack of color, padding, and contrast makes it more difficult to understand at a glance than the button on the right.

The Gestalt principle of Similarity states that objects with a similar appearance will be perceived by humans as related (Bradley, 2010). Similarity can be achieved through shape, color, size, location, and other properties. In interface design, this principle is useful for helping the user to understand which elements are related or part of a group. Designers must be careful, however, to avoid similarity when elements are not related, as users will otherwise perceive an unintended association and find the application more confusing to use.

A third Gestalt principle is Proximity. According to this law of organization, “things that are close to one another are perceived to be more related than things that are spaced farther apart” (Rutledge, 2009). This principle is simple yet powerful, and takes precedence over similarity (Ibid). One practical use of this law in an interface might be a multi-column layout, where each column is separated by empty space and contains a unified collection of related elements or data. If there is not enough space between separate elements, however, it will be more difficult for users to determine that they are unrelated.

According to the principle of Uniform Connectedness, elements that are visually connected are perceived as related (Bradley, 2010). As a simple example, consider a speech bubble that is connected to a cartoon character by an arrow. Uniform Connectedness trumps both visual similarity and proximity when determining related elements. As with the principle of similarity, this is an important tool for interface designers, but care must be taken that there are not unintended connections between separate parts of an interface (whether because of lines, colors, or other connecting elements). As a personal example, when I was recently planning the interface design for a web app, I considered using the same background color for the headings of two separate sections in the interface. However, I was never satisfied with how it looked. After studying the Gestalt principles, I realized that using the same color visually connected the two headings, causing a perception that they were related when they actually were not. This discovery prompted me to rethink my approach to the interface.

The last Gestalt principle I will examine is Parallelism, which states that parallel elements are perceived as more related than non-parallel elements (Bradley, 2010). A practical application of this to UI design could be rotating a less-related element so that its contents are at a different angle to the rest of the interface (consider the screenshot below, where the “Fork me on GitHub” ribbon is angled away from the rest of the content).

Performance

No matter how good an application’s layout, visual design, and navigation may be, it still won’t provide a good user experience if it performs poorly. Users expect animations to be smooth, and apps to respond immediately to their taps, clicks, and gestures. User perceptions of time often don’t match reality, however, and for developers this perception is critical. The more steps remembered in a process, the slower it seems (Tepper, 2012). Therefore, reducing the number of steps required to complete an action will make an app “feel” faster, even if the amount of time involved does not significantly change.

Research has shown that actions must take no longer than 50-100 milliseconds to feel instant (Tepper, 2012). Using an online latency demo (which at the time of this writing appears to no longer be available), I found that I personally started noticing a lack of responsiveness after about 75ms. Interestingly, users have a much higher tolerance for delays when there is an indication of progress, so if an action could take longer than 100ms it may be a good idea to give the user some type of feedback (for example, via an animated progress bar or other indicator).
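One way to apply this threshold (a sketch; the showSpinner and hideSpinner callbacks are hypothetical) is to delay the indicator slightly, so that fast actions never flash a spinner while slow ones still show progress:

```javascript
// Show a progress indicator only if the action takes longer than ~100 ms
function withProgress(action, showSpinner, hideSpinner, thresholdMs) {
    const timer = setTimeout(showSpinner, thresholdMs || 100);
    return action.then(
        (result) => {
            clearTimeout(timer); // fast path: spinner never appears
            hideSpinner();
            return result;
        },
        (err) => {
            clearTimeout(timer);
            hideSpinner();
            throw err;
        }
    );
}
```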

On the Web, optimizing performance and providing feedback can be even more important than in native apps, for three reasons:

Web-based applications will be accessed by a much wider variety of device types and performance classes.

Non-optimized apps can increase bandwidth costs, especially when scaling to thousands or millions of users.

Actions by one user can directly impact the performance of other users on the same server.

Understanding what makes a good interface and usable application is critical to creating the best user experience. Applying the principles of proprioception, Gestalt psychology, and performance optimization/feedback will enhance usability and create an environment that users will love to use and tell their friends about.

If you’re reading this, you probably already know what a CAPTCHA is. The most common form consists of an image with warped or obscured characters which must be entered into a text field. While these image-based CAPTCHAs tend to be effective at stopping spam, they are also poorly accessible, often slow, and require a third-party service or large font files. Surely there must be a better way.

There is. Text-based CAPTCHAs use simple logic questions to weed out bots while remaining accessible to users with disabilities. I found numerous text CAPTCHA implementations floating around the Web, but I was disappointed that they all either relied on a third-party service or required setting up a database. So I decided to make my own.

The result is Responsive Captcha, a PHP library which generates simple, random arithmetic and logic questions, and can be easily integrated into an existing form.
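This is not the library’s actual API, but the core idea can be sketched in a few lines:

```php
<?php

// Generate a simple arithmetic question and its expected answer
// (illustrative only; see the Responsive Captcha project for the real API)
function arithmeticQuestion()
{
    $a = mt_rand(1, 9);
    $b = mt_rand(1, 9);

    if (mt_rand(0, 1) === 0) {
        return array('question' => "What is $a plus $b?", 'answer' => $a + $b);
    }

    // Subtract the smaller number from the larger to keep answers non-negative
    return array(
        'question' => 'What is ' . max($a, $b) . ' minus ' . min($a, $b) . '?',
        'answer' => abs($a - $b),
    );
}
```

The generated answer can then be stored in the session and compared against the user’s form submission.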