When users look at a website, they rarely know the intricacies of how it is built, how each piece is put together and linked up so that what they see and interact with is responsive and robust. In general, websites have a “back-end” and a “front-end”. The front-end refers to what is, expectedly, the “front” of a webpage – the layout that you see, the colors, the placement, the animations, etc. The back-end is the core of the functionality of a website, the “behind-the-scenes” worker. It handles the inputs that the user gives from the front-end, makes sense of that information, and stores it. For example, on a simple log-in page like the one for WordPress, the front-end would be the look of the page itself, while the back-end would handle the email and password you put in through the form, verify that you are a valid user, and redirect you to your dashboard.

Let’s focus on the front-end of a website. The look and feel of a website is almost always built using a combination of HTML, CSS, and JavaScript working together. The HTML is the bare-bones skeleton of a website: it specifies the text, the forms, and any actual content the page needs. The CSS (Cascading Style Sheets) styles the webpage, specifying colors, fonts, background images, drop shadows, etc. The JavaScript is used to create animations, transitions, and more complex interactions on a website. All webpages need HTML and almost always need CSS, but JavaScript isn’t always necessary, especially for static pages.

In a paper by Håkon Wium Lie on the topic of CSS (Cascading Style Sheets), style sheet languages for structured documents on the web are discussed in depth. Style sheets have existed in one form or another since the 1980s, and were developed as a means of providing style information for web documents in a consistent way. As HTML grew, it came to encompass a wider variety of stylistic capabilities to meet the demands of web developers. This evolution gave the designer more control over site appearance, at the cost of more complex HTML. Variations in web browser implementations made consistent site appearance difficult, and users had less control over how web content was displayed. Robert Cailliau wanted to separate structure from presentation. The ideal, in his view, was to give the user different options by transferring three different kinds of style sheets: one for printing, one for presentation on screen, and one for the editor feature.

To improve web presentation capabilities, nine different style sheet languages were proposed to the World Wide Web Consortium. Of the nine proposals, two were chosen as the foundation for what became CSS: Cascading HTML Style Sheets (CHSS) and the Stream-based Style Sheet Proposal (SSP).

CSS allowed a document’s style to be influenced by multiple style sheets. One style sheet could inherit or “cascade” from another, permitting a mixture of stylistic preferences controlled equally by the site designer and the user. By the end of 1996, CSS was ready to become official.

The CSS 1 specification was completed in 1996, but it was more than three years before any web browser achieved near-full implementation of the specification. Internet Explorer 5.0 for Mac, shipping in March 2000, was the first browser to have full CSS 1 support. Other browsers followed soon afterwards, and many of them additionally implemented parts of CSS 2. CSS has various levels and profiles. Each level of CSS builds upon the last, typically adding new features; the levels are denoted CSS 1, CSS 2, and CSS 3. Profiles are typically a subset of one or more levels of CSS built for a particular device or user interface; currently there are profiles for mobile devices, printers, and television sets. Profiles should not be confused with media types, which were added in CSS 2.

CSS3 adds many new options, such as animations, gradients, media queries, shadows, transitions, the @font-face rule that allows you to embed fonts on a web page, and more.

With the advent of HTML5 and CSS3, developers and designers have a lot more freedom to explore creative forms of visualization on web interfaces. Often compared to Flash, HTML5, when used with CSS3 and JavaScript, has the ability to create dynamic websites with animated transitions quickly and easily. Moreover, while Flash websites required users to first download the software to run the Flash content itself, HTML5 is supported in all modern browsers and loads extremely quickly.

The ability to create dynamic websites opens many new doors for creativity. Web design trends follow technology, so as more and more features of HTML were invented, new and innovative approaches for displaying media content and information emerged. Recently, the use of parallax has gained popularity in web design.

The term “parallax” first came from the visual effect in 2D side-scrolling video games that used different background image movement speeds to create the illusion of depth during gameplay. This was generally done by making the background of the game move more slowly than the foreground in order to make it seem farther away.

This same concept applies to parallax site design, in which the background of the website moves at a different speed than the rest of the page for an impressive visual effect that allows countless creative applications for online storytelling. Parallax design gives websites a great opportunity to wow viewers with page depth and animation, take a storytelling approach to guide visitors through the site, and provoke curiosity.
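The arithmetic behind the effect is simple, and a minimal sketch can make it concrete. The function and layer names below are hypothetical, purely for illustration; they are not taken from any particular site. Each layer's offset is just the page's scroll position scaled by that layer's speed factor:

```javascript
// Illustrative sketch of the parallax principle: each layer scrolls at its
// own fraction of the page's scroll position. A factor of 1 moves with the
// page; a factor near 0 barely moves, so it reads as "far away".
function parallaxOffsets(scrollY, layers) {
  return layers.map(layer => ({
    name: layer.name,
    // The layer's translated position in pixels at this scroll position.
    offset: scrollY * layer.speed,
  }));
}

// Hypothetical layers: a distant sky, mid-ground hills, foreground text.
const layers = [
  { name: "sky", speed: 0.2 },
  { name: "hills", speed: 0.5 },
  { name: "text", speed: 1.0 },
];

console.log(parallaxOffsets(600, layers));
```

After scrolling 600 pixels, the sky layer has moved only a fifth as far as the foreground text, which is exactly the speed difference that creates the illusion of depth. In a real page these offsets would typically be applied on scroll events via CSS transforms.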

Let’s take a look at two websites that both use parallax scrolling, but for different purposes. The first one is an online article from the New York Times. The article itself is a short story about a cage fighter. As you scroll through the content, the illustrations come alive with clever parallax animations and alterations, allowing the viewer to fully immerse themselves in the content.

As you can see while scrolling through, the images come to life and aid in the storytelling process. This is achieved by making each element in an image a separate image; depending on where the user has scrolled, each separate image moves slightly in one direction. The scrolling interaction gradually exposes more of the story, revealing video, audio, and image galleries at relevant points.

Another example of parallax scrolling can be seen in another type of website – a portfolio. Whereas the New York Times article’s main purpose was to tell a story, this portfolio’s main purpose is to convey large amounts of information.

In this site, the use of parallax scrolling is not to add dimension to a 2D website, but to add interesting transitions and effects. The entire website is designed as a single-page scrolling site, but as the user scrolls through, the website plays through a series of animations that progressively add content to the screen.

This use of parallax can be seen as either innovative and interesting or as distracting. On one hand, it is creative because of its animated content; on the other, the constant motion can distract the user from the main content of the page.

The trend for parallax in modern websites shows no signs of stopping. As a technique it has been used by game designers and other artists for many years, and only in the last two years has it taken off as a popular way for web designers and developers to show off their skills and get creative.

The following two articles give examples of the advantages of using pre-fabrication in their respective fields, bridges and aircraft. I chose to look at these two industries for two very specific reasons. First, because of the incredible stresses/loads they must resist, the parts used in each must be extremely strong and meet rigid standards. Second, should failure occur, the loss of life is nearly guaranteed. Therefore, the parts must be reliable. I can apply these same criteria to my design for tornado-resistant structural connections. The following bullet points are close to or shown exactly as published in the original source.

After determining the function that a connection/joint must perform, the designer/engineer can begin to develop its geometry and select its material. I have had this book bookmarked for several weeks now and have finally been able to read through the chapter on connections. I’m glad that I didn’t get to it until now, or else I would have really jumped ahead of myself. This construction book is packed with information regarding the options available to designers when developing connections, such as position, fixity, forming and reforming technologies, material, etc. Pros and cons of each option are also discussed, leading to some really helpful insight into just how creative I can be in this process. For instance, I can use direct form-locking connections in conjunction with a material connection, using a geometry of built-up components angled just so, so that no moments arise through load transfer. This is very exciting!

I knew that I wanted to stay away from indirect form-locking connections, where an extra member such as a dowel or nails is used to connect components, because they feel unnatural in a digitally fabricated design and because they are insufficient to carry the concentrated loads they must transmit. Before reading this, however, I didn’t even know the approach had such a name. Even more importantly, rather than feeling like I would be taking the easy way out or being untrue to my goals, I now wish to use an adhesive agent with at least one of my sculpted connection details. Rather than gluing together a butt joint, by carving out a specific geometry between two components I would be increasing the surface area for the adhesive to bond to. Digitally fabricated, integrated form-locking design is still the necessary step in making stronger connections.

Other considerations brought up in this book are the assembly and disassembly of the connection and the feasibility of construction. Can the parts be easily transported to the site? Will special equipment be needed to assemble the building that would undermine pre-fabrication’s cost effectiveness? Are the parts ever meant to be taken apart? How will demolition and recycling be affected? Perhaps without intending to do so, the book gave me some fuel for the argument for pre-fabrication (parts made in a highly controlled environment, in specially equipped labs), yet it reminded me to be realistic. My construction technique should be as realistic as it is effective.

Shear Connections with rods, angled to eliminate bending moments along the rod.

Written and directed by Saschka Unseld, The Blue Umbrella is a Pixar short telling the story of a blue umbrella that meets a red umbrella on the streets. As their respective owners part ways, however, the blue umbrella desperately tries to get back to the red umbrella. While the blue umbrella battles weather and traffic, other street objects, including a personified street gutter, mailbox, and building pipe vent, try to help bring the blue umbrella to the red umbrella. At the end of the short, the blue umbrella, though battered and dirty, is united with the red umbrella, and it’s happily ever after for the two.

The level of photorealism in this short is unprecedented. Upon first viewing, it seems as though the video is a blend of live action and animation. The facial expressions on various street objects, however, give away the secret that it is in fact a computer-generated animation that uses photorealistic shading, lighting, and compositing. Like many of Pixar’s shorts, The Blue Umbrella is used to test yet another of Pixar’s new technologies, in this case a special global illumination technology. By mathematically modeling each beam of light in every scene, everything that is animated, including rain, looks extremely realistic.

While photorealism was not the original goal of this piece, the idea of making the short photorealistic was formulated during production. As the idea solidified, the team working on the project had to grow increasingly conscious of how to make things look real. Efforts such as not showing human faces and setting the scene at nighttime to shroud everything in darkness were conscious decisions to preserve the effect. Another technique they utilized was a shallow depth of field, in which only objects within a narrow range of distances from the camera are in focus. While the shallow depth of field aided in the quest for photorealism, it also helped set a lyrical mood and artistic atmosphere for the film.

Camera movement was especially important for the film’s cinematography and for achieving the goal of realism. Because computer-generated films are made by a machine, movements, especially camera movements, tend to be extremely smooth. In real life, camera movements can be jerky and angles tend to change more often. The point from which the camera shoots may also be exaggerated in computer-generated films; for example, placing a camera in a small nook and then panning it across a scene is not feasible in real life. Taking all of these notes in hand, Unseld decided to take a documentary approach to the short. By splicing together clips only a second long and shooting only from places where it would be reasonable for a human to stand, such as across the street, Unseld was able to create not only a realistic-looking animation, but also a realistic-feeling film.

As seen previously in the analysis of Brave, simulating computer generated hair can be tricky business. While our heroine from Brave, Merida, had her own set of issues with wild, red, curly hair, Rapunzel from Tangled, a film produced by Walt Disney Animation Studios, had a new set of problems with her 70 feet of long blonde hair. Though Rapunzel’s hair didn’t have to exhibit the bounce and curl of Merida’s hair, it did have to look voluminous and sleek. The mere length and excess of the hair also made it difficult to predict and correctly calculate its behavior.

a simple hair spring particle system

To combat these issues, Disney developed its own proprietary software, dynamicWires, which used a mass-spring system for curve dynamics. In the most basic representation, a single strand of hair can be visualized as a chain of particles connected by springs. The particles are extremely small and close together, so that when they are rendered the chain looks like a strand of hair. The spring connections provide the flexibility and the coupling between particles. In a simple system, a single spring is used between each pair of particles; adding other springs gives more control over the hair. In Tangled, for behavior such as hair piling on top of itself as well as on other objects and characters, spring forces were generated as segments of hair collided. This was necessary in order to keep the hair looking voluminous as well as to provide the frictional force of hair strands moving against one another.
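Since dynamicWires is proprietary, the following is only a generic sketch of the mass-spring idea described above, reduced to one dimension with made-up constants. A strand is a chain of particles; each free particle feels gravity plus spring forces from its neighbors, and the state is advanced with a simple explicit Euler step:

```javascript
// A minimal mass-spring hair strand (illustrative, not Disney's actual code).
const REST = 1.0;     // rest length of each spring segment
const K = 50.0;       // spring stiffness (assumed value)
const GRAVITY = -9.8; // acceleration, pointing down (negative y)
const DT = 0.01;      // time step in seconds

// A strand is an array of particles { y, vy }, hanging straight down.
function makeStrand(n) {
  return Array.from({ length: n }, (_, i) => ({ y: -i * REST, vy: 0 }));
}

function step(strand) {
  // Particle 0 is pinned (attached to the scalp) and never moves.
  for (let i = 1; i < strand.length; i++) {
    const p = strand[i];
    // Spring to the particle above: if stretched past REST, it pulls p up.
    const stretchAbove = (strand[i - 1].y - p.y) - REST;
    let force = K * stretchAbove + GRAVITY;
    // Spring to the particle below, if any: if stretched, it pulls p down.
    if (i + 1 < strand.length) {
      force -= K * ((p.y - strand[i + 1].y) - REST);
    }
    // Explicit Euler integration (unit mass assumed).
    p.vy += force * DT;
    p.y += p.vy * DT;
  }
}

const strand = makeStrand(4);
for (let t = 0; t < 100; t++) step(strand);
// The free particles have sagged under gravity while the springs keep them
// roughly a rest length apart.
```

A production system would add the extra springs mentioned above for collisions and strand-to-strand coupling, and would use a more stable integrator, but the particles-plus-springs structure is the same.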

In order for the story to look convincing, it had to look as though Rapunzel effortlessly drags her hair behind herself. This required the hair to smoothly glide and follow her movements and stop moving as she stops moving. In reality, that amount of hair would require extreme physical effort to move, and so, Disney couldn’t apply normal physics. Instead, they added a small tangential friction parameter for ground contacts. Now the hair could slide along behind her as a mass without falling apart and spreading outward. To make her hair stop when she stopped, a high static friction for ground contacts was added.

Throughout the movie, Rapunzel’s hair seems to have a life of its own, as Rapunzel uses it to accomplish various tasks. This level of control, while maintaining a natural hair look, took some fiddling on Disney’s part. By placing loose springs between strands of hair, they were able to prevent the hair from going everywhere. If two strands were too far apart, however, these springs would break, which gave the hair’s behavior some freedom.

The sheer amount of hair to be rendered, as can be imagined, could potentially take years to render fully. In order to speed up the process so that the movie could be produced on time, the hair was simulated as curves instead of the usual spring particle system; curves take far less time to render than a group of particles.

Overall, using such techniques, Disney was able to bring all 70 feet of Rapunzel’s hair to life.

Although the content of viral videos is the subject of most academic studies, I am much more interested in the social dynamics that cause videos to become viral in the first place. Jessica Owens of Pulsar Platform lays out a framework of what potential drivers exist for people to share videos.

Social Currency: This is the concept that people believe the content they share is a reflection of themselves. Therefore, they pick content to share that makes them look intelligent, cool, creative, different, or whatever image they want to give off.

Triggers: Big influencers like celebrities, news sites, or blogs are huge contributors to the spread of videos. Because they have such wide networks, the videos are able to reach many more people initially.

Emotion: Videos that make the viewers feel something – either positively or negatively – are generally more shared than videos that are neutral. These videos spark conversation because of the emotion they give off.

Public: When people see content from somebody they trust in their network, they are more likely to click, watch, and share it. People trust their close friends and family to share content that they will be interested in.

Owens also establishes a framework for uploaders to measure their own video performance. She splits the performance metrics into content metrics and audience metrics. Content metrics include total YouTube views, lifespan, shares (through Twitter or Facebook analysis), and variability (how attention varied from day to day). Audience metrics include popularity (the number of unique users sharing the video over its lifetime), amplification (how influential the people who shared the video were), globality (whether the video stayed in America or received global attention), and diffusion network (groups or hubs that contributed to the spread of the video). I believe these metrics could help me come up with my own framework for analyzing the lifecycles of videos.
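A few of these content metrics are easy to compute once you have a video's daily view counts. The sketch below is my own illustration, with hypothetical data and my own definitions of lifespan and variability (Owens does not specify exact formulas):

```javascript
// Illustrative content metrics computed from hypothetical daily view counts.
function contentMetrics(dailyViews) {
  // Total views over the video's whole lifetime.
  const total = dailyViews.reduce((a, b) => a + b, 0);
  // Lifespan: number of days with any attention at all (one possible definition).
  const lifespan = dailyViews.filter(v => v > 0).length;
  // Variability: mean absolute change between consecutive days, so a video
  // that spikes and crashes scores much higher than one with steady views.
  let variability = 0;
  for (let i = 1; i < dailyViews.length; i++) {
    variability += Math.abs(dailyViews[i] - dailyViews[i - 1]);
  }
  variability /= Math.max(1, dailyViews.length - 1);
  return { total, lifespan, variability };
}

// A hypothetical video that spiked on day 2 and then faded out.
console.log(contentMetrics([100, 5000, 2000, 300, 0]));
// → { total: 7400, lifespan: 4, variability: 2475 }
```

The audience metrics (amplification, diffusion network) would need social-graph data rather than raw counts, which is exactly why Owens treats them as a separate category.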