Web apps start small, but complexity grows fast as you add features. As your product evolves, you suddenly have to worry about session management, caching, UI performance, data flow, and browser compatibility. Most of these sound like obvious nuisances of building a web app, right? But when your product is still an idea, when you are still nurturing the essence of what it is, all of those complexities feel very far away. And that's the problem: you know they exist, and that you will eventually have to face them, but you never see them coming. By the time you notice, it's already too late, and you have an unintelligible mess of an application.

Unintelligible applications are costly to modify. They are brittle. They resist change. If there is one thing I have learned so far, it is that requirements do change, and they change often. If requirements are always changing, and complicated applications resist change, that raises the question: can you build a billion-dollar product when the product fights back at every change you make? Probably. Will you be happy developing it? I doubt it.

Here at Treasure Data we believe that we are building a billion-dollar product, maybe even more, and that's why it's important for us to choose a good framework.


20:30 - 20:50

An Overview of the Server-Side Bulk Loader

Muga Nishizawa

This talk presents our new data import mechanism, the Server-side Bulk Loader, which we recently added alongside our traditional import mechanisms: td-agent, the mobile SDKs, and the td import command. Unlike td-agent, which is installed and runs client-side and works by continuously importing data streams into Treasure Data, the Server-side Bulk Loader lets users upload data reliably in large bulks. It runs in the Treasure Data cloud and ingests data from the customer's own AWS S3 buckets into Treasure Data. At the moment, users configure server-side bulk ingestions through the command-line interface (td CLI) and execute them as one-off jobs; they can also schedule bulk loads to ingest data on a periodic basis. Data is extracted directly from the specified AWS S3 bucket, translated into the MPC (MessagePack Columnar) file format, and stored directly and efficiently in Treasure Data's proprietary Plazma storage. This mechanism relies on the open-source Embulk(*) project, a data loader core that supports input and output plugins. This talk will cover the underlying architecture, implementation details, and use cases for the Server-side Bulk Loader.
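As a rough illustration of the plugin-based design the abstract describes, an Embulk-style load from S3 into Treasure Data is driven by a small YAML file that pairs an input plugin with an output plugin. The bucket name, paths, credentials, and column layout below are placeholders, and the exact keys each plugin accepts can vary by version; treat this as a sketch of the approach, not the exact configuration the Server-side Bulk Loader uses internally.

```yaml
in:                                # input plugin: read objects from S3
  type: s3
  bucket: example-bucket           # placeholder bucket name
  path_prefix: logs/2015-          # load all objects under this prefix
  access_key_id: YOUR_ACCESS_KEY   # placeholder credentials
  secret_access_key: YOUR_SECRET_KEY
  parser:
    type: csv                      # parse each object as CSV
    charset: UTF-8
    columns:                       # placeholder schema
      - {name: time, type: timestamp, format: '%Y-%m-%d %H:%M:%S'}
      - {name: host, type: string}
      - {name: path, type: string}
out:                               # output plugin: write rows to Treasure Data
  type: td
  apikey: YOUR_TD_API_KEY          # placeholder API key
  database: example_db
  table: access_logs
```

With the td CLI, a file like this is typically passed to the connector subcommands (for example, a one-off job via `td connector:issue`, or a scheduled one via `td connector:create`); the exact subcommands and flags depend on the td CLI release, so consult its help output rather than taking this invocation as definitive.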