Serverless architecture for Deep Learning

Serverless architecture is a way to build and run applications without managing the processing part of the infrastructure. You still need to provision any transient or persistent storage your application requires, but you do not need to buy, rent, or provision processing servers.

Serverless architecture makes your application more scalable, decoupled, and modular. It reduces the operational overhead of managing servers and lets developers focus on what matters most to them: the functionality.

In addition to the general benefits of serverless architecture, this design offers the following benefits specifically for deep learning:

You do not need to learn any new frameworks such as TensorFlow or PyTorch.

Even for huge datasets and multiple passes of the data through a deep network, you do not need a GPU.

You can benefit from the low processing cost of AWS Lambda.

Proposed Architecture


Important points about the architecture

Mini batch — The architecture is feasible only with the mini-batch mode of processing, instead of processing all the data together in each pass. So you need to divide your data into mini batches in such a way that each execution of a Lambda function does not exceed 15 minutes (the limit set by AWS).
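A minimal sketch of the splitting step described above. The helper name and batch-size choice are assumptions for illustration; in practice you would size batches so one batch comfortably fits within a single Lambda invocation.

```python
# Hypothetical helper (not from the original article): split a dataset
# into consecutive mini-batches so that each Lambda invocation processes
# exactly one batch within the 15-minute execution cap.

def make_mini_batches(samples, batch_size):
    """Return a list of mini-batches, each of at most `batch_size` samples."""
    return [samples[i:i + batch_size] for i in range(0, len(samples), batch_size)]

# Example: 10 samples split into batches of 4 -> sizes 4, 4, 2.
batches = make_mini_batches(list(range(10)), batch_size=4)
```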

API Gateway / Application creator (Lambda function) / Application config (Dynamo table) — The user kick-starts the execution by calling the API, providing the following details in the payload: Application ID (a unique ID identifying this execution), the number of layers in the network, the number of nodes in each layer, the activation function for each layer, the learning rate, the regularization parameter, etc. The Application creator (Lambda function) stores all these details in the Application config (Dynamo table), which all other Lambda functions refer to.
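A sketch of what the Application creator could look like, under the assumptions that the table is named `ApplicationConfig` and the payload uses the field names below; none of these names are specified in the article. Numeric hyperparameters are stored as strings because DynamoDB does not accept Python floats directly.

```python
import json
import uuid

def build_app_config(payload):
    """Shape the API payload into the item stored in the Application
    config table. Field names here are illustrative assumptions."""
    return {
        "application_id": payload.get("application_id") or str(uuid.uuid4()),
        "num_layers": payload["num_layers"],
        "nodes_per_layer": payload["nodes_per_layer"],
        "activations": payload["activations"],
        "learning_rate": str(payload["learning_rate"]),
        "regularization": str(payload["regularization"]),
    }

def lambda_handler(event, context):
    """Application creator: persist the network definition so the
    forward/backward-pass functions can look it up later."""
    import boto3  # provided by the Lambda runtime
    item = build_app_config(json.loads(event["body"]))
    boto3.resource("dynamodb").Table("ApplicationConfig").put_item(Item=item)
    return {"statusCode": 200,
            "body": json.dumps({"application_id": item["application_id"]})}
```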

SNS notifications — The Lambda functions send SNS notifications to each other to kick-start the next step.
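The notification step could be sketched as below. The message fields and the single-topic design are assumptions; the article does not specify the message schema.

```python
import json

def next_step_message(application_id, mini_batch, layer, epoch):
    """Build the payload one Lambda publishes to trigger the next step.
    Field names are illustrative assumptions."""
    return json.dumps({
        "application_id": application_id,
        "mini_batch": mini_batch,
        "layer": layer,
        "epoch": epoch,
    })

def notify_next_step(topic_arn, message):
    """Publish the message to the SNS topic that the next Lambda
    function is subscribed to."""
    import boto3  # provided by the Lambda runtime
    boto3.client("sns").publish(TopicArn=topic_arn, Message=message)
```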

Forward Pass / Backward Pass — Two very generic Lambda functions that process the forward pass and the backward pass, respectively, for a given Application ID, mini batch, layer, and epoch. The forward-pass function kicks off another instance of itself if the layer it is processing is not the last layer in the network (by comparing the current layer with the Max layer from the Application config). The forward pass saves its output, the inputs to the next layer, in a Dynamo table named Forward Pass; the backward-pass function does the same in a Dynamo table named Backward Pass.
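A pure-Python sketch of the per-layer logic the forward-pass function would run. The sigmoid activation and the function names are assumptions (the article leaves the activation configurable per layer); the `should_trigger_next` check mirrors the comparison against the Max layer described above.

```python
import math

def sigmoid(z):
    """One possible activation; the article makes this configurable per layer."""
    return 1.0 / (1.0 + math.exp(-z))

def forward_layer(inputs, weights, biases):
    """Compute one layer's activations: sigmoid(W.x + b).
    weights[j] is the weight row feeding output node j."""
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def should_trigger_next(current_layer, max_layer):
    """Spawn another forward-pass instance unless this is the last layer,
    per the Max-layer comparison against the Application config."""
    return current_layer < max_layer
```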

Initializer — Initializes the weights for each execution or pass of a mini batch.

Cost calculator — Calculates the loss function and its differential w.r.t. the output of the last layer.
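The article does not name a specific loss, so as an illustration here is mean squared error and its gradient with respect to the last layer's output, which is what the cost calculator would hand to the first backward-pass invocation.

```python
def mse_loss(predictions, targets):
    """Mean squared error over one mini batch's outputs
    (with the conventional 1/2 factor)."""
    n = len(predictions)
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / (2 * n)

def mse_grad(predictions, targets):
    """Differential of the loss w.r.t. the last layer's output,
    fed into the backward pass."""
    n = len(predictions)
    return [(p - t) / n for p, t in zip(predictions, targets)]
```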