Table of Contents
- 1. Introduction to Serverless
- 1.1 Moving to the Cloud
- 1.2 Enter Serverless
- 1.3 From PaaS to FaaS
- 1.4 From Monolith to Microservices to Functions
- 1.5 FaaS Concepts
- 1.6 FaaS Execution Model
- 1.7 Traditional Scaling vs. Serverless
- 1.8 AWS Lambda Limits
- 1.9 Use Cases
- 1.10 Benefits
- 1.11 Drawbacks
- 1.12 FaaS Providers
- 1.13 Chapter Summary
- 2. The Serverless Framework
- 3. The Go Language
- 4. Building a CRUD API
- 5. Where to go from here
- 6. Glossary
1. Introduction to Serverless
First, let’s take a quick look at how software was traditionally built.
Web applications are deployed on web servers running on physical machines. As a software developer, you needed to be aware of the intricacies of the server that runs your software.
To get your application running on the server, you had to spend hours downloading, compiling, installing, configuring, and connecting all sorts of components. The OS of your machines needs to be constantly upgraded and patched for security vulnerabilities. In addition, servers need to be provisioned, load-balanced, configured, and maintained.
In short, managing servers is a time-consuming task which often requires dedicated and experienced systems operations personnel.

What is the point of software engineering? Contrary to what some might think, the goal of software engineering isn’t to deliver software. A software engineer’s job is to deliver value - to get the usefulness of software into the hands of users.
At the end of the day, you do need servers to deliver software. However, the time spent managing servers is time you could have spent on developing new features and improving your application. When you have a great idea, the last thing you want to do is set up infrastructure. Instead of worrying about servers, you want to focus more on shipping value.
How can we minimize the time required to deliver impact?
1.1 Moving to the Cloud
Over the past few decades, improvements in both the network and the platform layer - technologies between the operating system and your application - have made cloud computing easier.
Back in the days of yore (the early 1990s) developers only had bare metal hardware available to run their code, and the process of obtaining a new compute unit could take from days to months. Scaling took a lot of detailed planning, a huge amount of time and, most importantly, money. A shift was inevitable. The invention of virtual machines and the hypervisor shrunk the time to provision a new compute unit down to minutes through virtualization. Today, containers give us a new compute unit in seconds.
DevOps has evolved and matured over this period, leading to the proliferation of Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) providers. These third-party platforms let you delegate the task of maintaining the execution environment for your code to capable hands, freeing the software developer from server and deployment concerns.

Today, developers have moved away from deploying software on physical computers sitting in their living room. Instead of manually downloading and building a bunch of platform-level technologies on each server instance (and later having to repeat the process when you scale) you can go to a simple web user interface on your PaaS provider of choice, click a few options, and have your application automatically deployed to a fully provisioned cluster.
When your application usage grows, you can add capacity by clicking a few buttons. When you need to add infrastructure components, set up deployment pipelines, or enable database backups, you can do so from the same web interface.
The state of Platform-as-a-Service (PaaS) and cloud computing today is convenient and powerful - but can we do better?
1.2 Enter Serverless
The next major shift in cloud computing is commonly known as “Serverless” or “Functions-as-a-Service” (FaaS.)
Keep in mind that the phrase “serverless” doesn’t mean servers are no longer involved. It simply means that developers no longer need to think that much about them. Computing resources are consumed as services, without having to manage physical capacity or limits.

Serverless is a software development approach that aims to eliminate the need to manage infrastructure by:
- Using a managed compute service (Functions-as-a-Service) to execute your code, and
- Leveraging external services and APIs (third-party Software-as-a-Service products.)
There is now an abundance of third-party services: APIs that handle online payments, transactional email, user analytics, code quality measurement, content management, continuous integration, and other secondary concerns. In our everyday work we also make use of external tools for project management, file sharing, office administration, and more.

Instead of spending valuable resources on building up secondary capabilities such as infrastructure and server maintenance, you can focus on your core value proposition. Rather than building everything from scratch, you can connect prefabricated parts together and prune away secondary complexity from your application. By making use of third-party services, you can build loosely coupled, scalable, and efficient architectures quickly.
Serverless platforms are a major step towards delegating infrastructure problems to companies that are much better positioned to deal with them. No matter how good you become at DevOps, Amazon / Google / Microsoft will almost certainly have done it better. You can now get all the benefits of a modern container farm architecture without breaking the bank or spending years building it yourself.
1.3 From PaaS to FaaS
How is Functions-as-a-Service different from Platform-as-a-Service?
1.3.1 PaaS
Platform-as-a-Service (PaaS) providers such as Heroku offer support for long-running applications and services. When started, long-running server processes (also known as daemons) wait for an input, execute some code when an input is received, and then continue waiting for another request. These server processes run 24 / 7 and you are billed every month or hour regardless of actual usage.
PaaS platforms provide a managed virtual server provisioning system that frees you from managing actual servers.
However, you still have to think about the servers. You often need to describe your runtime environment with a `Dockerfile`, specify how many instances to provision, enable autoscaling, and so on.
1.3.2 FaaS
Functions-as-a-Service (FaaS) lets you deploy and invoke short-lived (ephemeral) function processes to handle individual requests. Function processes are created when an input event is received, and disappear after the code finishes executing. The platform handles the provisioning of instances, termination, monitoring, logging, and so on. Function processes come into existence only in response to an incoming request, and you are billed according to the number of function invocations and total execution time.
FaaS platforms go a step further than PaaS: you don’t even need to think about how much capacity you need in advance. You just upload your code, select from a set of available languages and runtimes, and the platform deals with the infrastructure.
The table below highlights the differences between PaaS and FaaS:

| | Platform-as-a-Service (PaaS) | Functions-as-a-Service (FaaS) |
|---|---|---|
| Startup time | Starts in minutes | Starts in milliseconds |
| Running time | Runs 24 / 7 | Runs when processing an incoming request |
| Cost | Interval billing cycles | Pay for usage, in invocations and duration |
| Unit of code | Monolithic codebase | Single-purpose, self-contained functions |
In summary, Functions-as-a-Service (FaaS) platforms offer developers the ability to build services that react to events, that auto-scale, that you pay for per-execution, and that take advantage of a larger ecosystem of cloud services.
1.4 From Monolith to Microservices to Functions
One of the modern paradigms in software development is the shift towards smaller, independently deployable units of code. Monolithic applications are out; microservices are in.
A monolithic application is built as a single unit, where the presentation, business logic, and data access layers all exist within the same application. The server-side application will handle HTTP requests, execute domain logic, retrieve and update data from the database, and select and populate HTML views to be sent to the browser. A monolith is a single logical executable. Any change to the system involves building and deploying a new version of the application. Scaling requires scaling the entire application rather than the individual parts that require greater resources.
In contrast, the idea behind microservices is you break down a monolithic application into smaller services by applying the single responsibility principle at the architectural level. You refactor existing modules within the monolith into standalone micro-services where each service is responsible for a distinct business domain. These microservices communicate with each other over the network (often via RESTful HTTP) to complete a larger goal. Benefits over a traditional monolithic architecture include independent deployability, scalability, language, platform and technology independence for different components, and increased architectural flexibility.
For example, we could create a `Users` microservice to be in charge of user registration, onboarding, and other concerns within the User domain.
Microservices allow teams to work in parallel, build resilient distributed architectures, and create decoupled systems that can be changed faster and scaled more effectively.

1.4.1 Where Functions fit in
Functions-as-a-Service (FaaS) utilizes smaller units of application logic in the form of single-purpose functions.
Instead of a monolithic application that you’d run on a PaaS provider, your system is composed of multiple functions working together.
For example, each HTTP endpoint of a RESTful API can be handled by a separate function.
The `POST /users` endpoint would trigger a `createUser` function, the `PATCH /users` endpoint would trigger an `updateUser` function, and so on.
A complex processing pipeline can be decomposed into multiple steps, each handled by a function.
Each function is independently deployable and scales automatically. Changes to the system can be localized to just the functions that are affected. In some cases, you can change your application’s workflow by ordering the same functions in a different way.
Functions-as-a-Service goes beyond microservices, enabling developers to create new software applications composed of tiny building blocks.
1.5 FaaS Concepts
Let’s look at the basic building blocks of applications built on FaaS: Events trigger Functions which communicate with Resources.

1.5.1 Functions
A Function is a piece of code deployed in the cloud, which performs a single task such as:
- Processing an image.
- Saving a blog post to a database.
- Retrieving a file.
When deciding what should go in a Function, think of the Single Responsibility Principle and the Unix philosophy:
- Make each program do one thing well.
- Expect the output of every program to become the input to another, as yet unknown, program.
Following these principles lets us maintain high cohesion and maximize code reuse. A Function is meant to be small and self-contained. Let’s look at an example AWS Lambda (Go) Function:
```go
// myFunction.go

package main

import (
	"context"
	"fmt"
	"github.com/aws/aws-lambda-go/lambda"
)

func HandleRequest(ctx context.Context, name string) (string, error) {
	return fmt.Sprintf("Hello %s!", name), nil
}

func main() {
	lambda.Start(HandleRequest)
}
```
The lambda function above takes `name` as input and returns a greeting based on that name.
FaaS providers each support a distinct set of languages and runtimes, and you are limited to the environments your provider offers; an execution environment available on one platform may not be available on another. On AWS Lambda, you can write your Functions in the following runtimes (as of January 2018):
- Node.js – v4.3.2 and 6.10
- Java – Java 8
- Python – Python 3.6 and 2.7
- .NET Core – .NET Core 1.0.1 (C#)
- Go - Go 1.x
With tooling, you can support compiled languages such as Rust which are not natively supported. This works by including executable binaries within your deployment package and having a supported language (such as Node.js) call the binaries.
1.5.1.1 AWS Lambda Function Environment
Each AWS Lambda function also has 512MB of non-persistent ‘scratch space’ in its own `/tmp` directory. The directory content remains when the container is frozen, providing a transient cache that can be used across multiple invocations. Files written to the `/tmp` folder may still exist from previous invocations.
However, when you write your Lambda function code, do not assume that AWS Lambda always reuses the container. Lambda may or may not re-use the same container across different invocations. You have no control over if and when containers are created or reused.
1.5.2 Events
One way to think of Events is as signals traveling across the neurons in your brain.

You can invoke your Function manually or you can set up Events to reactively trigger your Function. Events are produced by an event source. On AWS, events can come from:
- An AWS API Gateway HTTP endpoint request (useful for HTTP APIs)
- An AWS S3 bucket upload (useful for file uploads)
- A CloudWatch timer (useful for running tasks every N minutes)
- An AWS SNS topic (useful for triggering other lambdas)
- A CloudWatch Alert (useful for log processing)
The execution of a Function may emit an Event that subsequently triggers another Function, and so on ad infinitum - creating a network of functions entirely driven by events. We will explore this pattern in Chapter 4.
1.5.3 Resources
Most applications require more than a pure functional transformation of inputs. We often need to capture some stateful information such as user data and user generated content (images, documents, and so on.)
However, a Function by itself is stateless. After a Function is executed none of the in-process state within will be available to subsequent invocations. Because of that, we need to provision Resources in the form of an external database or network file storage to store state.

Resources are infrastructure components which your Functions depend on, such as:
- An AWS DynamoDB Table (for saving user and application data)
- An AWS S3 Bucket (for saving images and files)
1.6 FaaS Execution Model
AWS Lambda executes functions in an isolated container with resources specified in the function’s configuration (which defines the function container’s memory size, maximum timeout, and so on.) The FaaS platform takes care of provisioning and managing any resources needed to run your function.
The first time a Function is invoked after being created or updated, a new container with the appropriate resources will be created to execute it, and the code for the function will be loaded into the container. Because it takes time to set up a container and do the necessary bootstrapping, AWS Lambda has an initial cold start latency. You typically see this latency when a Lambda function is invoked for the first time or after it has been updated.

After a Function is invoked, AWS Lambda keeps the container warm for some time in anticipation of another function invocation. AWS Lambda tries to reuse the container for subsequent invocations.
1.7 Traditional Scaling vs. Serverless
One of the challenges in managing servers is allocating compute capacity.
Web servers need to be provisioned and scaled with enough compute capacity to match the amount of inbound traffic in order to run smoothly. With traditional deployments, you can find yourself over-provisioning or under-provisioning compute capacity. This is especially true when your traffic load is unpredictable. You can never know when your traffic will peak and to what level.

When you over-provision compute capacity, you’re wasting money on idle compute time. Your servers are just sitting there waiting for requests that don’t come. Even with autoscaling, the problem of paying for idle time persists, albeit to a lesser degree. When you under-provision, you’re struggling to serve incoming requests (and have to contend with dissatisfied users.) Your servers are overwhelmed with too many requests. Compute capacity is usually over-provisioned, and for good reason: when there’s not enough capacity, bad things can happen.

When you under-provision and the queue of incoming requests grows too large, some of your users’ requests will time out. This phenomenon is commonly known as the ‘Reddit Hug of Death’ or the Slashdot effect. Depending on the nature of your application, users will find this occurrence unacceptable.
With Functions-as-a-Service, you get autoscaling out of the box. Each incoming request spawns a short-lived function process that executes your function. If your system needs to process 100 requests at a specific time, the provider will spawn that many function processes without any extra configuration on your part. The provisioned capacity will be equal to the number of incoming requests. As such, there is no under- or over-provisioning in FaaS. You get instant, massive parallelism when you need it.
1.7.1 AWS Lambda Costs
With Serverless, you only pay for the number of executions and total execution duration. Since you don’t have to pay for idle compute time, this can lead to significant cost savings.
1.7.1.1 Requests
You are charged for the total number of execution requests across all your functions. Serverless platforms such as AWS Lambda count a request each time a function starts executing in response to an event notification or invoke call, including test invokes from the console.
- First 1 million requests per month are free
- $0.20 per 1 million requests thereafter ($0.0000002 per request)
1.7.1.2 Duration
Duration is calculated from the time your code begins executing until it returns or otherwise terminates, rounded up to the nearest 100ms. The price depends on the amount of memory you allocate to your function. You are charged $0.00001667 for every GB-second used.
1.7.1.3 Pricing Example
If you allocated 128MB of memory to your function, executed it 30 million times in one month, and it ran for 200ms each time, your charges would be calculated as follows:
1.7.1.3.1 Monthly compute charges
The monthly compute price is $0.00001667 per GB-s and the free tier provides 400,000 GB-s.
- Total compute (seconds) = 30M * (0.2sec) = 6,000,000 seconds
- Total compute (GB-s) = 6,000,000 * 128MB/1024 = 750,000 GB-s
- Total Compute – Free tier compute = Monthly billable compute seconds
- 750,000 GB-s – 400,000 free tier GB-s = 350,000 GB-s
- Monthly compute charges = 350,000 * $0.00001667 = $5.83
1.7.1.3.2 Monthly request charges
The monthly request price is $0.20 per 1 million requests and the free tier provides 1M requests per month.
- Total requests – Free tier request = Monthly billable requests
- 30M requests – 1M free tier requests = 29M Monthly billable requests
- Monthly request charges = 29M * $0.2/M = $5.80
1.7.1.3.3 Total charges
Total charges = Compute charges + Request charges = $5.83 + $5.80 = $11.63 per month
For more details, check out the official AWS Lambda pricing docs.
1.7.1.3.4 Additional charges
In a typical web-connected application that needs HTTP access, you’ll also incur API Gateway costs at $3.50 per million requests.
1.8 AWS Lambda Limits
The execution environment AWS Lambda gives you has some hard and soft limits. For example, the size of your deployment package or the amount of memory your Lambda function is allocated per invocation.
1.8.1 Invocation Limits

Some things of note:
- Memory: Each Function can have an allocated memory (defaults to 128MB.) Doubling a function’s allocated memory will also double the amount you are billed per 100ms. If you’re performing some memory-intensive tasks, increasing the memory allocation can lead to increased performance.
- Space: Your Functions have a 512MB ‘scratch’ space (`/tmp`) that’s useful for writing temporary files to disk. Note that there is no guarantee that files written to the `/tmp` space will be available in subsequent invocations.
- Execution Time: Fifteen minutes is the longest a function can execute. Exceeding this duration will immediately terminate the execution.
- Deployment Package Size: Your zipped software package cannot exceed 50MB. You can configure your Lambda function to pull in additional code and content in the form of layers up to 250MB.
1.8.2 Concurrency Limits

Concurrent executions refers to the number of executions of your function code that are happening at any given time. You can use the following formula to estimate your concurrent Lambda function invocations:
```
events (or requests) per second * function duration
```
For example, consider a Lambda function that processes Amazon S3 events. Suppose that the Lambda function takes on average three seconds and Amazon S3 publishes 10 events per second. Then, you will have 30 concurrent executions of your Lambda function.
By default, AWS Lambda limits the total concurrent executions across all functions within a given region to 1000. Any invocation that causes your function’s concurrent execution to exceed the safety limit is throttled. In this case, the invocation doesn’t execute your function.
1.8.3 AWS Lambda Limit Errors
Functions that exceed any of the limits listed in the previous limits tables will fail with an exceeded limits exception. These limits are fixed and cannot be changed at this time. For example, if you receive the exception CodeStorageExceededException or an error message similar to “Code storage limit exceeded” from AWS Lambda, you need to reduce the size of your code storage.
Each throttled invocation increases the Amazon CloudWatch Throttles metric for the function, so you can monitor the number of throttled requests. The throttled invocation is handled differently based on how your function is invoked:
1.8.3.1 Synchronous Invocation
If the function is invoked synchronously and is throttled, the invoking application receives a 429 error and the invoking application is responsible for retries.
1.8.3.2 Asynchronous Invocation
If your Lambda function is invoked asynchronously and is throttled, AWS Lambda automatically retries the throttled event for up to six hours, with delays between retries.
1.8.3.3 Stream-based Invocation
For stream-based event sources (Amazon Kinesis Streams and Amazon DynamoDB streams), AWS Lambda polls your stream and invokes your Lambda function. When your Lambda function is throttled, AWS Lambda attempts to process the throttled batch of records until the time the data expires. This time period can be up to seven days for Amazon Kinesis Streams. The throttled request is treated as blocking per shard, and Lambda doesn’t read any new records from the shard until the throttled batch of records either expires or succeeds.
1.8.4 Increasing your concurrency limit
To request a concurrent executions limit increase:
- Open the AWS Support Center page, sign in if necessary, and then choose Create case.
- For Regarding, select Service Limit Increase.
- For Limit Type, choose Lambda, fill in the necessary fields in the form, and then choose the button at the bottom of the page for your preferred method of contact.
1.9 Use Cases
FaaS can be applied to a variety of use cases. Here are some examples.
1.9.1 Event-driven File Processing
You can create functions to thumbnail images, transcode videos, index files, process logs, validate content, aggregate and filter data, and more, in response to real-time events.
Multiple lambda functions could be invoked in response to an event. For example, to create differently sized thumbnails of an image (small, medium, large) you can trigger three lambda functions in parallel, each with different dimension inputs.

Here’s an example architecture of a serverless asset processing pipeline:
- Whenever a file is uploaded to an S3 bucket,
- A lambda function is triggered with details about the uploaded file.
- The lambda function is executed, performing whatever processing we want it to do.
A major benefit of using FaaS for this use case is you don’t need to reserve large server instances to handle the occasional traffic peaks. Since your instances will be idle for most of the day, going the FaaS route can lead to major cost savings.
With FaaS, if your system needs to process 100 requests at a specific time, your provider will spawn that many function processes without any extra configuration on your part. You get instant compute capacity when you need it and avoid paying for idle compute time.
In Chapter 4, we will explore this pattern by building an event-driven image processing backend.
1.9.2 Web Applications
You can use FaaS together with other cloud services to build scalable web applications and APIs. These backends automatically scale up and down and can run in a highly available configuration across multiple data centers – with zero administrative effort required for scalability, back-ups, or multi-data center redundancy.

Here’s an example architecture of a serverless backend:
- Frontend clients communicate to the backend via HTTP.
- An API gateway routes HTTP requests to different lambda functions.
- Each lambda function has a single responsibility, and may communicate with other services behind the scenes.
Serverless web applications and APIs are highly available and can handle sudden traffic spikes, eliminating the Slashdot Effect. Going FaaS solves a common startup growing pain in which teams would rewrite their MVP in a different stack in order to scale. With Serverless, you can write code that scales from day 1.
In Chapters 5 and 6, we will explore this use case by building a parallelized web scraping backend and a CRUD commenting backend.
1.9.3 Webhooks
Webhooks (also known as ‘Reverse APIs’) let developers create an HTTP endpoint that is called when a certain event occurs in a third-party platform. Instead of polling endlessly for an update, the third-party platform can notify you of new changes.
For example, Slack uses incoming webhooks to post messages from external sources into Slack and outgoing webhooks to provide automated responses to messages your team members post.
Webhooks are powerful and flexible: they allow customers to implement arbitrary logic to extend your core product. However, webhooks are an additional deployable component your clients need to worry about. With FaaS, developers can write webhooks as Functions and not have to worry about provisioning, availability, or scaling.
FaaS also helps platforms that use webhooks to offer a smoother developer experience. Instead of having your customers provide a webhook URL to a service they need to host elsewhere, serverless webhooks let users implement their extension logic directly within your product. Developers would write their webhook code directly on your platform. Behind the scenes, the platform deploys the code to a FaaS provider.

The advantages of going serverless for webhooks are similar to those for APIs: low overhead, minimal maintenance, and automatic scaling. An example use case is setting up a Node.js webhook to process SMS requests with Twilio.
1.9.4 Scheduled Tasks
You can schedule functions to be executed at intervals, similar to a cron job. You can perform health checks, regular backups, report generation, performance testing, and more without needing to run an always-on server.
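With the Serverless framework covered in the next chapter, a scheduled function is a one-line event in your configuration. A sketch of what this might look like on AWS (the function name and handler path are hypothetical; `rate(5 minutes)` is a CloudWatch schedule expression):

```yaml
functions:
  healthCheck:
    handler: bin/healthCheck      # hypothetical compiled Go binary
    events:
      - schedule: rate(5 minutes) # run every 5 minutes, cron-style
```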
1.10 Benefits
1.10.1 High Availability and Scalability
A FaaS provider handles horizontal scaling automatically for you, spawning as many function processes as necessary to handle all incoming requests. FaaS providers also guarantee high availability, making sure your functions are always up.
As a developer, you are freed from having to think about provisioning multiple instances, load balancing, circuit breaking, and other aspects of deployment concerns. You can focus on developing and improving your core application.
1.10.2 Less Ops
There’s no infrastructure for you to worry about. Tasks such as server configuration and management, patching, and maintenance are taken care of by the vendor. You’re responsible only for your own code, leaving operational and administrative tasks to capable hands.
However, operational concerns are not completely eliminated; they just take on new forms. From an operational perspective, serverless architectures introduce different considerations, such as the loss of control over the execution environment and the complexity of managing many smaller deployment units, resulting in the need for much more sophisticated insights and observability solutions. Monitoring, logging, and distributed tracing are of paramount importance in Serverless architectures.
1.10.3 Granular Billing
With traditional PaaS, you are billed in interval (monthly / daily / hourly) cycles because your long-running server processes are running 24 / 7. Most of the time, this means you are paying for idle compute time.
FaaS billing is more granular and cost-effective, especially when traffic loads are uneven or unpredictable. With AWS Lambda you only pay for what you use, in terms of number of invocations and execution time in 100 millisecond increments. This leads to lower costs overall, because you’re not paying for idle compute resources.
1.11 Drawbacks
1.11.1 Vendor lock-in
When you use a cloud provider, you delegate much of the server control to a third-party. You are tightly coupled to any cloud services that you depend on. Porting your code from one platform or cloud service to another will require moving large chunks of your infrastructure.
On the other hand, the big vendors aren’t going anywhere. The only time this really matters is if your organization itself has a business requirement for multi-cloud support. Note that building a cross-cloud solution is a time-consuming process: you would need to build abstractions above the cloud to essentially standardise event creation and ingestion as well as the services that you need.
1.11.2 Lack of control
FaaS platforms are a black box.
Since the provider controls server configuration and provisioning, developers have limited control of the execution environment.
AWS Lambda lets you pick a runtime, configure memory size (from 128MB to 3GB), and configure timeouts (up to fifteen minutes), but not much else.
The maximum timeout makes plain AWS Lambda unsuitable for long-running tasks. AWS Lambda’s `/tmp` disk space is limited to 512MB, which also makes it unsuitable for certain tasks such as processing large videos.
Over time, expect these limits to increase. For example, AWS announced a memory size limit increase from 1.5GB to 3GB in November 2017.
1.11.3 Integration Testing is hard
The characteristics of serverless present challenges for integration testing:
- A serverless application is dependent on internet/cloud services, which are hard to emulate locally.
- A serverless application is an integration of separate, distributed services, which must be tested both independently, and together.
- A serverless application can feature event-driven, asynchronous workflows, which are hard to emulate entirely.
Fortunately, there are now open source projects such as localstack that let you run a fully functional local AWS cloud stack. Develop and test offline!
1.12 FaaS Providers
There are a number of FaaS providers currently on the market, such as:
- AWS Lambda
- Google Cloud Functions
- Azure Functions
- IBM OpenWhisk
When choosing which FaaS provider to use, keep in mind that each provider has a different set of available runtimes and event triggers. See the table below for a comparison of event triggers available on different FaaS providers (not an exhaustive list):
| | Amazon Web Services (AWS) | Google Cloud Platform (GCP) | Microsoft Azure | IBM |
|---|---|---|---|---|
| Realtime messaging | Simple Notification Service (SNS) | Cloud Pub/Sub | Azure Service Bus | IBM Message Hub |
| File storage | Simple Storage Service (S3) | Cloud Storage | Azure Storage | ? |
| NoSQL database | DynamoDB | Firebase | ? | IBM Cloudant |
| Logs | CloudWatch | Stackdriver | ? | ? |
| HTTP | Yes (API Gateway) | Yes | Yes | Yes (API Gateway) |
| Timer / Schedule | Yes | Yes | Yes | Yes |
For the rest of this book, we will be using AWS Lambda.
1.13 Chapter Summary
In serverless, a combination of smaller deployment units and higher abstraction levels provides compelling benefits such as increased velocity, greater scalability, lower cost, and the ability to focus on product features. In this chapter, you learned about:
- How serverless came to be and how it compares to PaaS.
- The basic building blocks of serverless. In serverless applications, Events trigger Functions. Functions communicate with cloud Resources to store state.
- Serverless benefits, drawbacks, and use cases.
In the next chapter, you will look at the Serverless framework and set up your development environment.
2. The Serverless Framework
2.1 Introduction

The Serverless framework (henceforth `serverless`) is a Node.js command-line interface (CLI) that lets you develop and deploy serverless functions, along with any infrastructure resources they require.
The `serverless` framework lets you write functions, add event triggers, and deploy to the FaaS provider of your choice.
Functions are automatically deployed, and events are compiled into the syntax your FaaS provider understands.
`serverless` is provider- and runtime-agnostic, so you are free to use any supported FaaS provider and language. As of January 2018, the framework supports the following FaaS providers:
- Amazon Web Services (AWS Lambda)
- Google Cloud Platform (Google Cloud Functions)
- Microsoft Azure (Azure Functions)
- IBM OpenWhisk
- Kubeless
- Spotinst
- Webtasks
Out of the box, the Serverless framework gives you:
- Structure: The framework’s unit of deployment is a ‘service’, a group of related functions.
- Best practices: Support for multiple staging environments, regions, environment variables, configs, and more.
- Automation: A handful of useful options and commands for packaging, deploying, invoking, and monitoring your functions.
- Plugins: Access to an active open-source ecosystem of plugins that extend the framework’s behaviour.
2.2 Installation
To install `serverless`, you must first install Node.js on your machine.
The best way to manage Node.js versions on your machine is to use the Node Version Manager (`nvm`), so we'll install that first.
Follow the step-by-step instructions below.
2.2.1 Install Node Version Manager
First, we'll install `nvm`, which lets you manage multiple Node.js versions on your machine and switch between them.
To install or update `nvm`, you can run the install script using cURL:
```shell
> curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.2/install.sh | \
  bash
```
or Wget:
```shell
> wget -qO- https://raw.githubusercontent.com/creationix/nvm/v0.33.2/install.sh | \
  bash
```
To verify that `nvm` has been installed, do:

```shell
> command -v nvm
nvm
```
Next, let’s install Node.js.
2.2.2 Install Node.js
To install the latest Node.js version that AWS Lambda supports, do:
```shell
> nvm install v6.10.3
```
To set a default Node.js version to be used in any new shell, use the `nvm` alias 'default':

```shell
> nvm alias default v6.10.3
```
To verify that the correct Node.js version has been installed, do:
```shell
> node -v
v6.10.3
```
The Node Package Manager (`npm`) should also have been installed:

```shell
> npm -v
3.10.10
```
For more `nvm` installation and usage instructions, check out the official `nvm` repo.
2.2.3 Install the Serverless Framework
Install the `serverless` node module with `npm`. In your terminal, do:

```shell
> npm install serverless -g
```
Let's check that `serverless` has been installed:

```shell
> serverless --version
1.13.2
```
Type `serverless help` to see all available commands:

```shell
> serverless help

Commands
* Serverless documentation: http://docs.serverless.com
* You can run commands with "serverless" or the shortcut "sls"
* Pass "--help" after any <command> for contextual help

config ........................ Configure Serverless
config credentials ............ Configures a new provider profile for the Serverless Framework
create ........................ Create new Serverless service
install ....................... Install a Serverless service from GitHub
package ....................... Packages a Serverless service
package function .............. undefined
deploy ........................ Deploy a Serverless service
deploy function ............... Deploy a single function from the service
deploy list ................... List deployed version of your Serverless Service
invoke ........................ Invoke a deployed function
invoke local .................. Invoke function locally
info .......................... Display information about the service
logs .......................... Output the logs of a deployed function
metrics ....................... Show metrics for a specific function
remove ........................ Remove Serverless service and all resources
rollback ...................... Rollback the Serverless service to a specific deployment
slstats ....................... Enable or disable stats
```
In Chapter 3, we will go through some of these commands in more detail.
For a full reference of `serverless` CLI commands, read the official docs.
2.3 Getting Started
2.3.1 Development Workflow
Here is a typical workflow when building applications using the `serverless` CLI:

- `serverless create` to bootstrap a Serverless project.
- Implement your functions.
- `serverless deploy` to deploy the current state of the project.
- `serverless invoke` or manually invoke to test the live function.
- `serverless logs` to stream your function's logs.
- Implement and run unit tests for your functions locally with mocks.
2.3.2 Project structure
Here is a typical `serverless` Go project structure:

```
.
+-- src/
|   +-- handlers/
|   |   +-- addTodo.go
|   |   +-- listTodos.go
|   +-- lib/
+-- serverless.yml
+-- .gitignore
```
Tests go into the `/test` directory. This is where the unit tests for our functions and supporting code live. Test inputs are stored in `/test/fixtures`.
The `serverless.yml` config file is in our service's root directory.
2.3.3 serverless.yml
The `serverless.yml` file describes your application's functions, HTTP endpoints, and supporting resources.
It uses a DSL that abstracts away platform-specific nuances.
Have a look at an example `serverless.yml` below:
```yaml
service: test-service

provider:
  name: aws
  runtime: go1.x
  stage: dev
  region: us-east-1

# you can define service wide environment variables here
# environment:
#   variable1: value1

# you can add packaging information here
# package:
#   include:
#     - include-me.js
#     - include-me-dir/**
#   exclude:
#     - exclude-me.js
#     - exclude-me-dir/**

functions:
  hello:
    handler: handler.hello

    # The following are a few example events you can configure
    # events:
    #   - http:
    #       path: users/create
    #       method: get
    #   - s3: ${env:BUCKET}
    #   - schedule: rate(10 minutes)
    #   - sns: greeter-topic
    #   - stream: arn:aws:dynamodb:region:XXXXXX:table/foo/stream/1970-01-01T00:00:00.000
    #   - alexaSkill
    #   - iot:
    #       sql: "SELECT * FROM 'some_topic'"
    #   - cloudwatchEvent:
    #       event:
    #         source:
    #           - "aws.ec2"
    #         detail-type:
    #           - "EC2 Instance State-change Notification"
    #         detail:
    #           state:
    #             - pending
    #   - cloudwatchLog: '/aws/lambda/hello'

    # Define function environment variables here
    # environment:
    #   variable2: value2

# you can add statements to the Lambda function's IAM Role here
# iamRoleStatements:
#   - Effect: "Allow"
#     Action:
#       - "s3:ListBucket"
#     Resource: { "Fn::Join": ["", ["arn:aws:s3:::", { "Ref": "ServerlessDeploymentBucket" }]] }
#   - Effect: "Allow"
#     Action:
#       - "s3:PutObject"
#     Resource:
#       Fn::Join:
#         - ""
#         - - "arn:aws:s3:::"
#           - "Ref": "ServerlessDeploymentBucket"
#           - "/*"

# you can add CloudFormation resource templates here
# resources:
#   Resources:
#     NewResource:
#       Type: AWS::S3::Bucket
#       Properties:
#         BucketName: my-new-bucket
#   Outputs:
#     NewOutput:
#       Description: "Description for the output"
#       Value: "Some output value"
```
In the `serverless.yml`, you define your Functions, the Events that trigger them, and the Resources your Functions use.
The `serverless` CLI reads and translates this file into a provider-specific language such as AWS CloudFormation so that everything is set up in your FaaS provider.
2.3.3.1 Events
If you are using AWS as your provider, events are anything in AWS that can trigger an AWS Lambda function, such as an S3 bucket upload, an SNS topic, or an HTTP endpoint created via API Gateway.
You can define what events trigger your functions in `serverless.yml`:
```yaml
# serverless.yml

functions:
  createUser: # Function name
    handler: bin/handlers/createUser
    events:
      - http:
          path: users/create
          method: get
      - s3: ${env:BUCKET}
      - schedule: rate(10 minutes)
      - sns: greeter-topic
```
2.3.3.2 Custom Variables
The Serverless framework provides a powerful variable system which allows you to add dynamic data into your `serverless.yml`.
In the `serverless.yml` below, we define a `custom` block with variables such as `imagesBucketName` that we can re-use throughout the project:
```yaml
# serverless.yml

custom:
  imagesBucketName: snapnext-images

provider:
  ...
  environment:
    IMAGES_BUCKET_NAME: ${self:custom.imagesBucketName}
```
2.3.3.3 Environment Variables
You can also define environment variables, such as `IMAGES_BUCKET_NAME`, in the `provider.environment` block, and use them within your functions.
In the `serverless.yml` below, we expose the bucket's name `custom.imagesBucketName` as the environment variable `IMAGES_BUCKET_NAME`, which a Node.js function, for example, can read as `process.env.IMAGES_BUCKET_NAME`:
```yaml
# serverless.yml

custom:
  imagesBucketName: snapnext-images

provider:
  ...
  environment:
    IMAGES_BUCKET_NAME: ${self:custom.imagesBucketName}
```
2.3.3.4 Resources
Defining AWS Resources such as S3 buckets and DynamoDB tables requires some familiarity with AWS CloudFormation syntax. CloudFormation is an AWS-specific language used to define your AWS infrastructure as code:
```yaml
# serverless.yml

resources:
  Resources:
    ImagesBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: ${self:custom.imagesBucketName}
        AccessControl: PublicRead
```
2.3.3.5 IAM Role Statements
By default, Functions lack access to AWS resources such as S3 buckets and DynamoDB tables. AWS Lambda executes your Lambda function on your behalf by assuming the role you provided at the time of creating the Lambda function. Therefore, you need to grant the role the necessary permissions that your Lambda function needs, such as permissions for Amazon S3 actions to read an object.
You can give these Functions access to resources by writing Identity and Access Management (IAM) role statements for your Function’s role. For example:
```yaml
# serverless.yml

custom:
  imagesBucketName: snapnext-images

provider:
  iamRoleStatements:
    - Effect: Allow
      Action:
        - S3:GetObject
      Resource:
        - Fn::Join:
            - ""
            - - "arn:aws:s3:::"
              - "${self:custom.imagesBucketName}/*"
```
The above role statement allows our Functions to retrieve objects from an S3 bucket.
Each IAM role statement has three attributes: `Effect`, `Action`, and `Resource`. `Effect` can be either `Allow` or `Deny`. `Action` refers to specific AWS operations such as `S3:GetObject`. `Resource` points to the `arn` of the specific AWS resources to grant access to.
Always remember to specify the minimum set of permissions your Lambda functions require. For a list of available IAM Actions, refer to the official AWS IAM reference.
Keep in mind that `provider.iamRoleStatements` applies to a single IAM role that is created by the Serverless framework and shared across your Functions.
Alternatively, you can create one role per function by creating an `AWS::IAM::Role` CloudFormation resource and specifying which role is used for each Function:
```yaml
# serverless.yml

functions:
  usersCreate:
    ...
    role: arn:aws:iam::XXXXXX:role/role # IAM role which will be used for this function
```
2.3.4 Serverless Plugins
Plugins let you extend the framework beyond its core features. With plugins, you can:
- Add support for other FaaS providers such as Google Cloud Functions, Azure Functions, and Kubeless.
- Get more information about your stack.
- Keep your functions warm.
- Coordinate multi-step workflows with AWS Step Functions.
- Use the latest JS version with Babel via Webpack.
- And more!
2.3.4.1 Installing Plugins
`serverless` plugins are packaged as Node.js modules that you can install with `npm`:

```shell
> cd my-app
> npm install --save custom-serverless-plugin
```
Plugins are added on a per-service basis and are not applied globally. Make sure you are in your service's root directory when you install a plugin!
2.3.4.2 Using Plugins
Including and configuring your plugin is done within your project's `serverless.yml`.
The `custom` block in the `serverless.yml` file is where you can add any necessary configuration for your plugins, for example:
```yaml
plugins:
  - custom-serverless-plugin

custom:
  customkey: customvalue
```
In the example above, the `custom-serverless-plugin` is configured with a `custom.customkey` attribute.
Each plugin should have documentation on what configuration options are available.
2.4 Additional Setup
Before we can continue to the hands-on section, there are a few more things you need to set up.
2.4.1 Amazon Web Services (AWS) Setup
2.4.1.1 AWS Account Registration
For the rest of this book, you’ll be using AWS as your FaaS provider. If you haven’t already, sign up for an AWS account!
Once you've signed up, you'll need to create an AWS user which has administrative access to your account. This user will allow the Serverless Framework to configure the services in your AWS account.
First, log in to your AWS account and go to the Identity & Access Management (IAM) page.

Click on the Users sidebar link.
Click on the Add user button. Enter serverless-admin, tick the Programmatic access Access type checkbox, and select Next: Permissions.

Click Attach existing policies directly, tick Administrator Access, and select Next: Review.

Review your choices, then select Create user.

Save the Access key ID and Secret access key of the newly created user.

Done! We’ve now created a user which can perform actions in our AWS account on our behalf (thanks to the Administrator Access policy).
2.4.1.2 Set Up Credentials
Next, we'll pass the user's API key and secret to `serverless`.
With the `serverless` framework installed on your machine, do:

```shell
serverless config credentials --provider aws --key <your_aws_key> --secret <your_aws_secret>
```
Take a look at the config CLI reference for more information.
2.5 Chapter Summary
In this chapter, you learned about the Serverless framework and set up your development environment.
In the next chapter, you will build a simple application with the Serverless framework.
3. The Go Language
3.1 Why Go?
The Go language has:
- Incredible runtime speed.
- Amazing concurrency abstraction (goroutines).
- A great batteries-included standard library.
- Ease of deployment.
3.2 Whirlwind Tour of Go
3.2.1 Installation
Download Go and follow the installation instructions.
On OSX, you can download the go1.9.3.darwin-amd64.pkg package file, open it, and follow the prompts to install the Go tools. The package installs the Go distribution to /usr/local/go.
To test your Go installation, open a new terminal and enter:
```shell
$ go version
go version go1.9.2 darwin/amd64
```
Then, add the following to your `~/.bashrc` to set your `GOROOT` and `GOPATH` environment variables:
```shell
export GOROOT=/usr/local/go
export GOPATH=/Users/<your.username>/gopath
export PATH=$PATH:$GOROOT/bin:$GOPATH/bin
```

```shell
$ source ~/.bashrc
```
Note that your `GOPATH` should be the directory under which your source code and third-party packages will live.
Next, try setting up a workspace: create a directory at `$GOPATH/src/learn-go/` and in that directory create a file named `hello.go`:
```shell
$ mkdir learn-go
$ cd learn-go
$ touch hello.go
```
```go
// hello.go

package main

import "fmt"

func main() {
	fmt.Printf("hello, world\n")
}
```
Run your code by calling `go run hello.go`.
You can also `go build` Go programs into binaries, which lets you execute the built binary directly:
```shell
$ go build hello.go
```
The command above will build an executable named `hello` in the directory alongside your source code. Execute it to see the greeting:

```shell
$ ./hello
hello, world
```
If you see the “hello, world” message then your Go installation is working!
In the sub-sections that follow, we’ll quickly run through the basics of the Go language.
3.2.2 Types
Go is a statically-typed language; it comes with several built-in types such as strings, integers, floats, and booleans.
The `types.go` program below demonstrates Go's basic built-in types:
```go
// types.go

package main

import "fmt"

func main() {

	// Strings
	fmt.Println("go" + "lang")

	// Integers
	fmt.Println("1+1 =", 1+1)

	// Floats
	fmt.Println("7.0/3.0 =", 7.0/3.0)

	// Booleans
	fmt.Println(true && false)
	fmt.Println(true || false)
	fmt.Println(!true)
}
```
The Go programs shown in this chapter are available as part of the sample code included with this book.
Running the above program from the terminal gives you the following:
```shell
$ go run types.go
golang
1+1 = 2
7.0/3.0 = 2.3333333333333335
false
true
false
```
3.2.3 Variables
Variables in Go always have a specific type and that type cannot change.
You declare variables with `var`, or use the `:=` syntax to both declare and initialize a variable.
The `variables.go` program below demonstrates how to declare and initialize variables:
```go
// variables.go

package main

import "fmt"

func main() {

	// Use var to declare 1 or more variables.
	var a string = "initial"
	fmt.Println(a)

	var b, c int = 1, 2
	fmt.Println(b, c)

	// Go can infer a variable's type.
	var d = true
	fmt.Println(d)

	// Types have a default zero-value, for example `int` has a `0` zero-value.
	var e int
	fmt.Println(e)

	// The := shorthand below is equivalent to `var f string = "short"`.
	f := "short"
	fmt.Println(f)
}
```
Running the above program from the terminal gives you:
```shell
$ go run variables.go
initial
1 2
true
0
short
```
3.2.4 Branching
3.2.4.1 If/else
Branching with `if` and `else` in Go is straightforward:
```go
// if.go

package main

import "fmt"

func main() {
	// Parentheses around the expression are optional
	if 7%2 == 0 {
		fmt.Println("7 is even")
	} else {
		fmt.Println("7 is odd")
	}

	// A statement can precede conditionals.
	// Any variables declared in this statement are available in all branches.
	if num := 9; num < 0 {
		fmt.Println(num, "is negative")
	} else if num < 10 {
		fmt.Println(num, "has 1 digit")
	} else {
		fmt.Println(num, "has multiple digits")
	}
}
```
Running the above program from the terminal gives you:
```shell
$ go run if.go
7 is odd
9 has 1 digit
```
3.2.4.2 Switches
Switch statements express conditionals across many branches.
The `switch.go` program below demonstrates different ways you can use switches to perform branching logic:
```go
// switch.go

package main

import "fmt"
import "time"

func main() {
	// A basic switch
	i := 2
	fmt.Print("Write ", i, " as ")
	switch i {
	case 1:
		fmt.Println("one")
	case 2:
		fmt.Println("two")
	case 3:
		fmt.Println("three")
	case 4, 5, 6: // Branches with multiple clauses
		fmt.Println("either four, five, or six")
	default: // A wildcard branch
		fmt.Println("something else")
	}

	// Expressions can be non-constants
	t := time.Now()
	switch {
	case t.Hour() < 12:
		fmt.Println("It's before noon")
	default:
		fmt.Println("It's after noon")
	}

	// A type switch lets you discover the type of a value
	whatAmI := func(i interface{}) {
		switch t := i.(type) {
		case bool:
			fmt.Println("I'm a bool")
		case int:
			fmt.Println("I'm an int")
		default:
			fmt.Printf("Don't know type %T\n", t)
		}
	}
	whatAmI(true)
	whatAmI(1)
	whatAmI("hey")
}
```
Running the above program from the terminal gives you:
```shell
$ go run switch.go
Write 2 as two
It's after noon
I'm a bool
I'm an int
Don't know type string
```
3.2.5 Data Structures
3.2.5.1 Slices
Slices in Go are dynamically sized arrays.
The `slices.go` program below shows how you can initialize, read, and modify slices:
```go
// slices.go

package main

import "fmt"

func main() {

	// Declare and initialize a slice
	s := make([]string, 3)
	fmt.Println("empty s:", s)

	// Set and get
	s[0] = "a"
	s[1] = "b"
	s[2] = "c"
	fmt.Println("s:", s)
	fmt.Println("s[2]:", s[2])
	fmt.Println("len(s):", len(s))

	// Append
	s = append(s, "d")
	s = append(s, "e", "f")
	fmt.Println("append:", s)
}
```
Running the above program from the terminal gives you:
```shell
$ go run slices.go
empty s: [  ]
s: [a b c]
s[2]: c
len(s): 3
append: [a b c d e f]
```
3.2.5.2 Maps
Maps in Go are similar to hashes or dictionaries in other languages.
The `maps.go` program shows how you can initialize, read, and modify maps:
```go
// maps.go

package main

import "fmt"

func main() {
	// Create an empty map with make: `map[key-type]value-type`
	m := make(map[string]int)

	// Set
	m["a"] = 1
	m["b"] = 2

	// Get
	v1 := m["a"]
	fmt.Println("m: ", m)
	fmt.Println("v1", v1)

	// Removing a key-value pair
	delete(m, "a")
	fmt.Println("delete:", m)
}
```
Running the above program from the terminal gives you:
```shell
$ go run maps.go
m:  map[a:1 b:2]
v1 1
delete: map[b:2]
```
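One detail `maps.go` above doesn't show: reading a missing key returns the value type's zero value, so Go offers an optional second boolean return, commonly called the "comma ok" idiom, to test whether a key was actually present. A short sketch:

```go
package main

import "fmt"

func main() {
	m := map[string]int{"b": 2}

	// The second return value reports whether the key was present.
	v, ok := m["a"]
	fmt.Println(v, ok) // 0 false: "a" is absent, v is the zero value

	v, ok = m["b"]
	fmt.Println(v, ok) // 2 true
}
```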
3.2.6 Loops
3.2.6.1 For
`for` is Go's only looping construct:
```go
// for.go

package main

import "fmt"

func main() {
	i := 1
	for i <= 3 {
		fmt.Println(i)
		i = i + 1
	}

	for j := 7; j <= 9; j++ {
		fmt.Println(j)
	}
}
```
Running the above program from the terminal gives you:
```shell
$ go run for.go
1
2
3
7
8
9
```
3.2.6.2 Range
You can use `range` to iterate over elements in a variety of data structures.
The `range.go` program demonstrates how you can iterate over slices and maps:
```go
// range.go

package main

import "fmt"

func main() {
	// Iterating over a slice
	nums := []int{2, 3, 4}
	sum := 0
	for index, num := range nums {
		fmt.Println("current index: ", index)
		sum += num
	}
	fmt.Println("sum:", sum)

	// Iterating over a map's key/value pairs
	kvs := map[string]string{"a": "apple", "b": "banana"}
	for k, v := range kvs {
		fmt.Printf("%s -> %s\n", k, v)
	}
}
```
Running the above program from the terminal gives you:
```shell
$ go run range.go
current index:  0
current index:  1
current index:  2
sum: 9
a -> apple
b -> banana
```
3.2.7 Functions
Functions in Go accept parameters of specified types and return values of specified types.
The `functions.go` program demonstrates how you can define and call functions in Go:
```go
// functions.go

package main

import "fmt"

// This function takes two ints and returns their sum as an int.
func add(a int, b int) int {
	return a + b
}

func main() {
	result := add(1, 2)
	fmt.Println("1+2 = ", result)
}
```
Running the above program from the terminal gives you:
```shell
$ go run functions.go
1+2 =  3
```
3.2.8 Pointers
Pointers allow you to pass references to values within your program.
You use the `*` prefix to refer to variables by reference (memory address) instead of by value.
The `&` prefix returns the memory address of a variable.
```go
// pointers.go

package main

import "fmt"

// zeroval has an int parameter, so arguments will be passed to it by value.
// zeroval will get a copy of ival distinct from the one in the calling function.
func zeroval(ival int) {
	ival = 0
}

// zeroptr in contrast has an *int parameter, meaning that it takes an int pointer.
// The *iptr code in the function body then dereferences the pointer from its
// memory address to the current value at that address. Assigning a value to a
// dereferenced pointer changes the value at the referenced address.
func zeroptr(iptr *int) {
	*iptr = 0
}

func main() {
	i := 1
	fmt.Println("initial:", i)

	zeroval(i)
	fmt.Println("zeroval:", i)

	// The &i syntax gives the memory address of i, i.e. a pointer to i
	zeroptr(&i)
	fmt.Println("zeroptr:", i)

	fmt.Println("pointer:", &i)
}
```
Running the above program from the terminal gives you:
```shell
$ go run pointers.go
initial: 1
zeroval: 1
zeroptr: 0
pointer: 0x42131100
```
Note that `zeroval` doesn't change the `i` in `main`, but `zeroptr` does because it has a reference to the memory address of that variable.
3.2.9 Structs
Go Structs are typed collections of fields. They are similar to classes in other languages. Structs are the primary data structure used to encapsulate business logic in Go programs.
The `structs.go` program shows how you can initialize, read, and modify structs in Go:
```go
// structs.go

package main

import "fmt"

type Person struct {
	name string
	age  int
}

func main() {
	// Create a new struct
	fmt.Println(Person{name: "Alice", age: 21})

	// Omitted fields will be zero-valued
	fmt.Println(Person{name: "Bob"})

	// Getters and setters use dot notation
	s := Person{name: "Ann", age: 22}
	fmt.Println(s.name)

	s.age = 30
	fmt.Println(s.age)
}
```
Running the above program from the terminal gives you:
```shell
$ go run structs.go
{Alice 21}
{Bob 0}
Ann
30
```
Go supports methods defined on struct types.
The `methods.go` program demonstrates how you can use method definitions to add behaviour to structs:
```go
// methods.go

package main

import "fmt"

type Rect struct {
	width, height int
}

// Methods can be defined for either pointer or value receiver types.
// Here's an example of a value receiver.
func (r Rect) perim() int {
	return 2*r.width + 2*r.height
}

// This area method has a receiver type of *Rect.
// You may want to use a pointer receiver type to avoid copying on method
// calls or to allow the method to mutate the receiving struct.
func (r *Rect) area() int {
	return r.width * r.height
}

func main() {
	// Here we call the 2 methods defined for our struct.
	// Go automatically handles conversion between values and pointers for method calls.
	r := &Rect{width: 10, height: 5}
	fmt.Println("area: ", r.area())
	fmt.Println("perim:", r.perim())
}
```
Running the above program from the terminal gives you:
```shell
$ go run methods.go
area:  50
perim: 30
```
3.2.10 Interfaces
Go Interfaces are named collections of method signatures.
```go
// interfaces.go

package main

import "fmt"
import "math"

// Here's a basic interface for geometric shapes.
type Geometry interface {
	area() float64
	perim() float64
}

// For our example we'll implement this interface on Rect and Circle types.

// To implement an interface in Go, we just need to implement all the methods
// in the interface. Here we implement Geometry on Rects.
type Rect struct {
	width, height float64
}

func (r Rect) area() float64 {
	return r.width * r.height
}

func (r Rect) perim() float64 {
	return 2*r.width + 2*r.height
}

// The implementation for circles.
type Circle struct {
	radius float64
}

func (c Circle) area() float64 {
	return math.Pi * c.radius * c.radius
}

func (c Circle) perim() float64 {
	return 2 * math.Pi * c.radius
}

// If a variable has an interface type, then we can call methods that are in
// the named interface. Here's a generic measure function taking advantage of
// this to work on any Geometry.
func measure(g Geometry) {
	fmt.Println(g)
	fmt.Println(g.area())
	fmt.Println(g.perim())
}

func main() {
	r := Rect{width: 3, height: 4}
	c := Circle{radius: 5}

	// The Circle and Rect struct types both implement the Geometry interface
	// so we can use instances of these structs as arguments to measure.
	measure(r)
	measure(c)
}
```
Unlike interfaces in other languages, Go interfaces are implicit rather than explicit. You don't have to annotate a struct to say that it implements an interface. As long as a struct implements all the methods defined in an interface, that struct implements the interface.
In the `interfaces.go` program above, you can see how the `measure` function works for both `Circle` and `Rect`, because both structs implement the methods defined in the `Geometry` interface.
Running the above program from the terminal gives you:
```shell
$ go run interfaces.go
{3 4}
12
14
{5}
78.53981633974483
31.41592653589793
```
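Because interface satisfaction is implicit, Go programmers sometimes add a compile-time assertion to document (and have the compiler verify) that a type implements an interface. A small sketch, repeating the Rect type from above so the snippet stands alone:

```go
package main

// Geometry is the interface from interfaces.go, repeated here so the
// snippet is self-contained.
type Geometry interface {
	area() float64
	perim() float64
}

type Rect struct {
	width, height float64
}

func (r Rect) area() float64  { return r.width * r.height }
func (r Rect) perim() float64 { return 2*r.width + 2*r.height }

// This blank-identifier assignment fails to compile if Rect ever stops
// satisfying Geometry: a zero-cost, compile-time check.
var _ Geometry = Rect{}

func main() {}
```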
3.2.11 Errors
In Go it’s idiomatic to communicate errors via an explicit, separate return value.
```go
// errors.go

package main

import "errors"
import "fmt"

// By convention, errors are the last return value and have type error,
// a built-in interface.
func f1(arg int) (int, error) {
	if arg == 42 {
		return -1, errors.New("42 does not compute")
	}

	return arg + 1, nil
}
```
3.2.12 Packages
Nearly every program we’ve seen so far included this line:
```go
import "fmt"
```

`fmt` is the name of a package that includes a variety of functions related to formatting and output to the screen.
Go provides packages as a mechanism for code reuse.
Create a new `learn-packages/` directory in the `learn-go/` folder:
```shell
mkdir learn-packages
cd learn-packages
```
Let's create a new package called `math`. Create a directory called `math/` and in that directory add a new file called `math.go`:

```shell
mkdir math
touch math/math.go
```
```go
// $GOPATH/src/learn-go/learn-packages/math/math.go

package math

func Average(xs []float64) float64 {
	total := float64(0)
	for _, x := range xs {
		total += x
	}
	return total / float64(len(xs))
}
```
In our `main.go` program, we can import and use our `math` package:
```go
// $GOPATH/src/learn-go/learn-packages/main.go

package main

import "fmt"
import "learn-go/learn-packages/math"

func main() {
	xs := []float64{1, 2, 3, 4}
	avg := math.Average(xs)
	fmt.Println(avg)
}
```
3.2.13 Package Management
`dep` is a dependency management tool for Go.
On MacOS you can install or upgrade to the latest released version with Homebrew:

```shell
$ brew install dep
$ brew upgrade dep
```
To get started, create a new directory `learn-dep/` in your `$GOPATH/src`:
```
$ mkdir learn-dep
$ cd learn-dep
```
Initialize the project with `dep init`:
```
$ dep init
$ ls
Gopkg.lock  Gopkg.toml  vendor
```
`dep init` will create the following:

- `Gopkg.lock` is a record of the exact versions of all of the packages that you used for the project.
- `Gopkg.toml` is a list of packages your project depends on.
- `vendor/` is the directory where your project’s dependencies are installed.
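For reference, a `Gopkg.toml` constraint entry looks something like the sketch below. The package name and version here are only illustrative examples, not values `dep init` will necessarily generate:

```toml
# Gopkg.toml (illustrative sketch; package and version are examples)

[[constraint]]
  name = "github.com/pkg/errors"
  version = "0.8.0"
```

`dep ensure` reads these constraints and records the exact resolved versions in `Gopkg.lock`.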
3.2.13.1 Adding a new dependency
Create a `main.go` file with the following contents:
```go
// main.go

package main

import "fmt"

func main() {
    fmt.Println("Hello world")
}
```
Let’s say that we want to introduce a new dependency on github.com/pkg/errors. This can be accomplished with one command:
```
$ dep ensure -add github.com/pkg/errors
```
That’s it!
For detailed usage instructions, check out the official `dep` docs.
3.3 Go on AWS Lambda
AWS released support for Go on AWS Lambda in January 2018. You can now build Go programs with typed structs representing Lambda event sources and common responses using the aws-lambda-go SDK.
Your Go programs are compiled into a statically-linked binary, bundled up into a Lambda deployment package, and uploaded to AWS Lambda.
3.4 Go Lambda Programming Model
You write code for your Lambda function in one of the languages AWS Lambda supports. Regardless of the language you choose, there is a common pattern to writing code for a Lambda function that includes the following core concepts:
- Handler – Handler is the function AWS Lambda calls to start execution of your Lambda function. Your handler should process incoming event data and may invoke any other functions/methods in your code.
- The context object – AWS Lambda also passes a context object to the handler function, which lets you retrieve metadata such as the execution time remaining before AWS Lambda terminates your Lambda function.
- Logging – Your Lambda function can contain logging statements. AWS Lambda writes these logs to CloudWatch Logs.
- Exceptions – There are different ways to end a request successfully or to notify AWS Lambda an error occurred during execution. If you invoke the function synchronously, then AWS Lambda forwards the result back to the client.
Your Lambda function code must be written in a stateless style, and have no affinity with the underlying compute infrastructure. Your code should expect local file system access, child processes, and similar artifacts to be limited to the lifetime of the request. Persistent state should be stored in Amazon S3, Amazon DynamoDB, or another cloud storage service.
3.4.1 Lambda Function Handler
A Lambda function written in Go is authored as a Go executable.
You write your handler function code by including the `github.com/aws/aws-lambda-go/lambda` package and a `main()` function:
```go
package main

import (
    "fmt"
    "context"
    "github.com/aws/aws-lambda-go/lambda"
)

type MyEvent struct {
    Name string `json:"name"`
}

func HandleRequest(ctx context.Context, name MyEvent) (string, error) {
    return fmt.Sprintf("Hello %s!", name.Name), nil
}

func main() {
    lambda.Start(HandleRequest)
}
```
Note the following:

- package main: In Go, the package containing `func main()` must always be named `main`.
- import: Use this to include the libraries your Lambda function requires.
  - context: the Context Object.
  - fmt: the Go formatting package used to format the return value of your function.
  - github.com/aws/aws-lambda-go/lambda: as mentioned previously, implements the Lambda programming model for Go.
- `func HandleRequest(ctx context.Context, name MyEvent) (string, error)`: This is your Lambda handler signature and includes the code which will be executed. The parameters denote the following:
  - ctx context.Context: provides runtime information for your Lambda function invocation. `ctx` is the variable you declare to leverage the information available via the Context Object.
  - name MyEvent: the input event, whose `Name` field is used in the return statement.
  - (string, error): the return types, a string result plus standard error information.
- `return fmt.Sprintf("Hello %s!", name.Name), nil`: simply returns a formatted "Hello" greeting with the name supplied in the event. `nil` indicates there were no errors and the function executed successfully.
- `func main()`: the entry point that executes your Lambda function code. This is required. By adding `lambda.Start(HandleRequest)` between the `func main() {}` code brackets, your Lambda function will be executed.
3.4.1.1 Using Structured Types
In the example above, the input type was a simple string. But you can also pass in structured events to your function handler:
```go
package main

import (
    "fmt"
    "github.com/aws/aws-lambda-go/lambda"
)

type MyEvent struct {
    Name string `json:"What is your name?"`
    Age  int    `json:"How old are you?"`
}

type MyResponse struct {
    Message string `json:"Answer"`
}

func HandleLambdaEvent(event MyEvent) (MyResponse, error) {
    return MyResponse{Message: fmt.Sprintf("%s is %d years old!", event.Name, event.Age)}, nil
}

func main() {
    lambda.Start(HandleLambdaEvent)
}
```
Your request would then look like this:
```json
{
    "What is your name?": "Jim",
    "How old are you?": 33
}
```
And the response would look like this:
```json
{
    "Answer": "Jim is 33 years old!"
}
```
Each AWS event source (API Gateway, DynamoDB, etc.) has its own input/output structs.
For example, Lambda functions triggered by API Gateway events use the `events.APIGatewayProxyRequest` input struct and the `events.APIGatewayProxyResponse` output struct:
```go
package main

import (
    "context"
    "fmt"

    "github.com/aws/aws-lambda-go/events"
    "github.com/aws/aws-lambda-go/lambda"
)

func handleRequest(ctx context.Context, request events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
    fmt.Printf("Body size = %d.\n", len(request.Body))
    fmt.Println("Headers:")
    for key, value := range request.Headers {
        fmt.Printf("  %s: %s\n", key, value)
    }

    return events.APIGatewayProxyResponse{Body: request.Body, StatusCode: 200}, nil
}

func main() {
    lambda.Start(handleRequest)
}
```
For more information on handling events from AWS event sources, see aws-lambda-go/events.
3.4.2 The Context Object
Lambda functions have access to metadata about their environment and the invocation request such as:
- How much time is remaining before AWS Lambda terminates your Lambda function.
- The CloudWatch log group and log stream associated with the executing Lambda function.
- The AWS request ID returned to the client that invoked the Lambda function.
- If the Lambda function is invoked through AWS Mobile SDK, you can learn more about the mobile application calling the Lambda function.
- You can also use the AWS X-Ray SDK for Go to identify critical code paths, trace their performance and capture the data for analysis.
3.4.2.1 Reading function metadata
AWS Lambda provides the above information via the `context.Context` object that the service passes as a parameter to your Lambda function handler.
You need to import the `github.com/aws/aws-lambda-go/lambdacontext` package to access the contents of the `context.Context` object:
```go
package main

import (
    "context"
    "log"
    "github.com/aws/aws-lambda-go/lambda"
    "github.com/aws/aws-lambda-go/lambdacontext"
)

func Handler(ctx context.Context) {
    lc, _ := lambdacontext.FromContext(ctx)
    log.Print(lc.AwsRequestID)
}

func main() {
    lambda.Start(Handler)
}
```
In the example above, `lc` is the variable used to consume the information that the context object captured, and `log.Print(lc.AwsRequestID)` prints that information, in this case the `AwsRequestID`.
3.4.3 Logging
Your Lambda function can contain logging statements. AWS Lambda writes these logs to CloudWatch.
```go
package main

import (
    "log"
    "github.com/aws/aws-lambda-go/lambda"
)

func HandleRequest() {
    log.Print("Hello from Lambda")
}

func main() {
    lambda.Start(HandleRequest)
}
```
By using the `log` package, Lambda will write additional logging information such as the timestamp.
Instead of the `log` package, you can use print statements in your code, as shown below:
```go
package main

import (
    "fmt"
    "github.com/aws/aws-lambda-go/lambda"
)

func HandleRequest() {
    fmt.Print("Hello from Lambda")
}

func main() {
    lambda.Start(HandleRequest)
}
```
In this case, only the text passed to the print method is sent to CloudWatch. The log entries will not have the additional information that the `log.Print` function adds.
3.4.4 Function Errors
3.4.4.1 Raising custom errors
You can create custom error handling to return an error value directly from your Lambda function:
```go
package main

import (
    "errors"
    "github.com/aws/aws-lambda-go/lambda"
)

func OnlyErrors() error {
    return errors.New("something went wrong!")
}

func main() {
    lambda.Start(OnlyErrors)
}
```
When invoked, the above function will return:
```json
{ "errorMessage": "something went wrong!" }
```
3.4.4.2 Raising unexpected errors
Lambda functions can fail for reasons beyond your control, such as network outages. In Go, you use panic in response to unexpected errors. If your code panics, Lambda will attempt to capture the error and serialize it into the standard error JSON format. Lambda will also attempt to insert the value of the panic into the function’s CloudWatch logs.
```go
package main

import (
    "errors"

    "github.com/aws/aws-lambda-go/lambda"
)

func handler(string) (string, error) {
    panic(errors.New("Something went wrong"))
}

func main() {
    lambda.Start(handler)
}
```
When invoked, the above function will return the full stack trace due to the `panic`:
```
{
    "errorMessage": "Something went wrong",
    "errorType": "errorString",
    "stackTrace": [
        {
            "path": "github.com/aws/aws-lambda-go/lambda/function.go",
            "line": 27,
            "label": "(*Function).Invoke.function"
        },
        ...
    ]
}
```
3.4.5 Environment Variables
Use the `os.Getenv` function to read environment variables:
```go
package main

import (
    "fmt"
    "os"

    "github.com/aws/aws-lambda-go/lambda"
)

func HandleRequest() {
    fmt.Printf("Lambda is in the %s region and is on %s",
        os.Getenv("AWS_REGION"), os.Getenv("AWS_EXECUTION_ENV"))
}

func main() {
    lambda.Start(HandleRequest)
}
```
Lambda configures a list of environment variables by default.
3.5 Summary
The Go we’ve covered so far is more than enough to get you started with building Go applications on AWS Lambda.
3.5.1 Further Learning
To learn more about the Go language, be sure to check out the following learning resources:
4. Building a CRUD API
In this chapter, you will build a simple CRUD (Create-Read-Update-Delete) API using Go and AWS Lambda. Each CRUD action will be handled by a serverless function. The final application has some compelling qualities:
- Less Ops: No servers to provision. Faster development.
- Infinitely Scalable: AWS Lambda will invoke your Functions for each incoming request.
- Zero Downtime: AWS Lambda will ensure your service is always up.
- Cheap: You don’t need to provision a large server instance 24 / 7 to handle traffic peaks. You only pay for real usage.
4.1 Prerequisites
Before we continue, make sure that you have:
- Go and `serverless` installed on your machine.
- Your AWS account set up.
Follow the steps in Chapter 2 to set up your development environment, if you haven’t already.
4.2 Background Information
Web applications often require more than a pure functional transformation of inputs. You need to capture stateful information such as user or application data and user-generated content (images, documents, and so on).
However, serverless Functions are stateless. After a Function is executed, none of the in-process state will be available to subsequent invocations. To store state, you need to provision Resources that communicate with your Functions.
On top of AWS Lambda, you will need to use the following AWS services to capture state:
- AWS DynamoDB: Managed NoSQL database to store application data.
- AWS API Gateway: HTTP API Interface to our functions.
The subsections that follow briefly explain what each AWS service does.
4.2.1 Amazon DynamoDB
Amazon DynamoDB is a fully managed NoSQL cloud database and supports both document and key-value store models.
With DynamoDB, you can create database tables that can store and retrieve any amount of data, and serve any level of request traffic. You can scale up or scale down your tables’ throughput capacity without downtime or performance degradation, and use the AWS Management Console to monitor resource utilization and performance metrics.
Our CRUD API uses DynamoDB to store all user-generated data in our application.
4.2.2 Amazon API Gateway
Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale.
With a few clicks in the AWS Management Console, you can create an API that acts as a “front door” for applications to access data, business logic, or functionality from your back-end services, such as workloads running on Amazon Elastic Compute Cloud (Amazon EC2), code running on AWS Lambda, or any Web application.
Our CRUD API uses API Gateway to allow our Functions to be triggered via HTTP.
4.3 Design
4.3.1 Problem Decomposition
For each endpoint in our backend’s HTTP API, you can create a Function that corresponds to an action. For example:
```
GET    /todos       -> listTodos
POST   /todos       -> addTodo
PATCH  /todos/{id}  -> completeTodo
DELETE /todos/{id}  -> deleteTodo
```
The `listTodos` Function returns all of our todos, `addTodo` adds a new row to our todos table, and so on.
When designing Functions, keep the Single Responsibility Principle in mind.
Remember: Events trigger Functions which communicate with Resources. In this project, our Functions will be triggered by HTTP and communicate with a DynamoDB table.
4.4 Development
4.4.1 Example Application Setup
Check out the `serverless-crud-go` sample application included as part of this book. This example application will serve as a handy reference as you build your own.
In your terminal, do:
```
cd serverless-crud-go
./scripts/build.sh
serverless deploy
```
Running the `build.sh` script will call the `go build` command to create statically-linked binaries in the `bin/` sub-directory of your project.
Here is the build script in detail:
```bash
#!/usr/bin/env bash

echo "Compiling functions to bin/handlers/ ..."

rm -rf bin/

cd src/handlers/
for f in *.go; do
  filename="${f%.go}"
  if GOOS=linux go build -o "../../bin/handlers/$filename" ${f}; then
    echo "✓ Compiled $filename"
  else
    echo "✕ Failed to compile $filename!"
    exit 1
  fi
done

echo "Done."
```
4.4.1.1 Step 0: Set up boilerplate
As part of the sample code included in this book, you have a `serverless-boilerplate-go` template project you can copy to quickly get started.
Copy the entire project folder to your `$GOPATH/src` and rename the directory to your own project name.
Remember to update the project’s name in `serverless.yml` to your own project name!
The `serverless-boilerplate-go` project has this structure:
```
.
+-- scripts/
+-- src/
    +-- handlers/
+-- .gitignore
+-- README.md
+-- Gopkg.toml
+-- serverless.yml
```
Within this boilerplate, we have the following:

- `scripts/` contains a `build.sh` script that you can use to compile binaries for the Lambda deployment package.
- `src/handlers/` is where your handler functions will live.
- `Gopkg.toml` is used for Go dependency management with the `dep` tool.
- `serverless.yml` is a Serverless project configuration file.
- `README.md` contains step-by-step setup instructions.
In your terminal, navigate to your project’s root directory and install the dependencies defined in the boilerplate:
```
cd <your-project-name>
dep ensure
```
With that set up, let’s get started with building our CRUD API!
4.4.1.2 Step 1: Create the POST /todos endpoint
4.4.1.2.1 Event
First, define the `addTodo` Function’s HTTP Event trigger in `serverless.yml`:
```yaml
# serverless.yml

package:
  individually: true
  exclude:
    - ./**

functions:
  addTodo:
    handler: bin/handlers/addTodo
    package:
      include:
        - ./bin/handlers/addTodo
    events:
      - http:
          path: todos
          method: post
          cors: true
```
In the above configuration, notice two things:

- Within the `package` block, we tell the Serverless Framework to only package the compiled binaries in `bin/handlers` and exclude everything else.
- The `addTodo` function has an HTTP event trigger set to the `POST /todos` endpoint.
4.4.1.2.2 Function
Create a new file within the `src/handlers/` directory called `addTodo.go`:
```go
// src/handlers/addTodo.go

package main

import (
    "context"
    "fmt"
    "os"
    "time"
    "encoding/json"

    "github.com/aws/aws-lambda-go/lambda"
    "github.com/aws/aws-lambda-go/events"
    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/dynamodb"
    "github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute"

    "github.com/satori/go.uuid"
)

type Todo struct {
    ID          string `json:"id"`
    Description string `json:"description"`
    Done        bool   `json:"done"`
    CreatedAt   string `json:"created_at"`
}

var ddb *dynamodb.DynamoDB

func init() {
    region := os.Getenv("AWS_REGION")
    if session, err := session.NewSession(&aws.Config{ // Use aws sdk to connect to DynamoDB
        Region: &region,
    }); err != nil {
        fmt.Println(fmt.Sprintf("Failed to connect to AWS: %s", err.Error()))
    } else {
        ddb = dynamodb.New(session) // Create DynamoDB client
    }
}

func AddTodo(ctx context.Context, request events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
    fmt.Println("AddTodo")

    var (
        id        = uuid.Must(uuid.NewV4(), nil).String()
        tableName = aws.String(os.Getenv("TODOS_TABLE_NAME"))
    )

    // Initialize todo
    todo := &Todo{
        ID:        id,
        Done:      false,
        CreatedAt: time.Now().String(),
    }

    // Parse request body
    json.Unmarshal([]byte(request.Body), todo)

    // Write to DynamoDB
    item, _ := dynamodbattribute.MarshalMap(todo)
    input := &dynamodb.PutItemInput{
        Item:      item,
        TableName: tableName,
    }
    if _, err := ddb.PutItem(input); err != nil {
        return events.APIGatewayProxyResponse{ // Error HTTP response
            Body:       err.Error(),
            StatusCode: 500,
        }, nil
    } else {
        body, _ := json.Marshal(todo)
        return events.APIGatewayProxyResponse{ // Success HTTP response
            Body:       string(body),
            StatusCode: 200,
        }, nil
    }
}

func main() {
    lambda.Start(AddTodo)
}
```
In the above handler function:

- In the `init()` function, we perform some initialization logic: making a database connection to DynamoDB. `init()` is automatically called before `main()`.
- The `AddTodo` handler function parses the request body for a `string` description.
- Then, it calls `ddb.PutItem` with an environment variable `TODOS_TABLE_NAME` to insert a new row into our DynamoDB table.
- Finally, it returns an HTTP success or error response back to the client.
4.4.1.2.3 Resource
Our handler function stores data in a DynamoDB table. Let’s define this table resource in `serverless.yml`:
```yaml
# serverless.yml

custom:
  todosTableName: ${self:service}-${self:provider.stage}-todos
  todosTableArn: # ARNs are addresses of deployed services in AWS space
    Fn::Join:
      - ":"
      - - arn
        - aws
        - dynamodb
        - Ref: AWS::Region
        - Ref: AWS::AccountId
        - table/${self:custom.todosTableName}

provider:
  ...
  environment:
    TODOS_TABLE_NAME: ${self:custom.todosTableName}
  iamRoleStatements: # Defines what other AWS services our lambda functions can access
    - Effect: Allow # Allow access to DynamoDB tables
      Action:
        - dynamodb:Scan
        - dynamodb:GetItem
        - dynamodb:PutItem
        - dynamodb:UpdateItem
        - dynamodb:DeleteItem
      Resource:
        - ${self:custom.todosTableArn}

resources:
  Resources: # Supporting AWS services
    TodosTable: # Define a new DynamoDB Table resource to store todo items
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: ${self:custom.todosTableName}
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
```
In the `resources` block, we define a new `AWS::DynamoDB::Table` resource using AWS CloudFormation.
We then make the provisioned table’s name available to our handler function by exposing it as an environment variable in the `provider.environment` block.
To give our functions access to AWS resources, we also define some IAM role statements that allow our functions to perform certain actions, such as `dynamodb:PutItem`, on our table resource.
4.4.1.2.4 Summary
Run `./scripts/build.sh` and `serverless deploy`. If everything goes well, you will receive an HTTP endpoint URL that you can use to trigger your Lambda function.
Verify your function by making an HTTP POST request to the URL with the following body:
```json
{
    "description": "Hello world"
}
```
If everything goes well, you will receive a `200` HTTP success response and be able to see a new row in your DynamoDB table via the AWS console.
4.4.1.3 Step 2: Create the GET /todos endpoint
4.4.1.3.1 Event
First, define the `listTodos` Function’s HTTP Event trigger in `serverless.yml`:
```yaml
# serverless.yml

functions:
  listTodos:
    handler: bin/handlers/listTodos
    package:
      include:
        - ./bin/handlers/listTodos
    events:
      - http:
          path: todos
          method: get
          cors: true
```
4.4.1.3.2 Function
Create a new file within the `src/handlers/` directory called `listTodos.go`:
```go
// src/handlers/listTodos.go

package main

import (
    "context"
    "fmt"
    "encoding/json"
    "os"

    "github.com/aws/aws-lambda-go/lambda"
    "github.com/aws/aws-lambda-go/events"
    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/dynamodb"
    "github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute"
)

type Todo struct {
    ID          string `json:"id"`
    Description string `json:"description"`
    Done        bool   `json:"done"`
    CreatedAt   string `json:"created_at"`
}

type ListTodosResponse struct {
    Todos []Todo `json:"todos"`
}

var ddb *dynamodb.DynamoDB

func init() {
    region := os.Getenv("AWS_REGION")
    if session, err := session.NewSession(&aws.Config{ // Use aws sdk to connect to DynamoDB
        Region: &region,
    }); err != nil {
        fmt.Println(fmt.Sprintf("Failed to connect to AWS: %s", err.Error()))
    } else {
        ddb = dynamodb.New(session) // Create DynamoDB client
    }
}

func ListTodos(ctx context.Context, request events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
    fmt.Println("ListTodos")

    var (
        tableName = aws.String(os.Getenv("TODOS_TABLE_NAME"))
    )

    // Read from DynamoDB
    input := &dynamodb.ScanInput{
        TableName: tableName,
    }
    result, _ := ddb.Scan(input)

    // Construct todos from response
    var todos []Todo
    for _, i := range result.Items {
        todo := Todo{}
        if err := dynamodbattribute.UnmarshalMap(i, &todo); err != nil {
            fmt.Println("Failed to unmarshal")
            fmt.Println(err)
        }
        todos = append(todos, todo)
    }

    // Success HTTP response
    body, _ := json.Marshal(&ListTodosResponse{
        Todos: todos,
    })
    return events.APIGatewayProxyResponse{
        Body:       string(body),
        StatusCode: 200,
    }, nil
}

func main() {
    lambda.Start(ListTodos)
}
```
In the above handler function:

- First, you retrieve the `tableName` from environment variables.
- Then, you call `ddb.Scan` to retrieve rows from the todos DB table.
- Finally, you return a success or error HTTP response depending on the outcome.
4.4.1.3.3 Summary
Run `./scripts/build.sh` and `serverless deploy`. You will receive an HTTP endpoint URL that you can use to trigger your Lambda function.
Verify your function by making an HTTP GET request to the URL.
If everything goes well, you will receive a `200` HTTP success response and see a list of todo JSON objects:
```
> curl https://<hash>.execute-api.<region>.amazonaws.com/dev/todos
{
    "todos": [
        {
            "id": "d3e38e20-5e73-4e24-9390-2747cf5d19b5",
            "description": "buy fruits",
            "done": false,
            "created_at": "2018-01-23 08:48:21.211887436 +0000 UTC m=+0.045616262"
        },
        {
            "id": "1b580cc9-a5fa-4d29-b122-d20274537707",
            "description": "go for a run",
            "done": false,
            "created_at": "2018-01-23 10:30:25.230758674 +0000 UTC m=+0.050585237"
        }
    ]
}
```
4.4.1.4 Step 3: Create the PATCH /todos/{id} endpoint
4.4.1.4.1 Event
First, define the `completeTodo` Function’s HTTP Event trigger in `serverless.yml`:
```yaml
# serverless.yml

functions:
  completeTodo:
    handler: bin/handlers/completeTodo
    package:
      include:
        - ./bin/handlers/completeTodo
    events:
      - http:
          path: todos/{id}
          method: patch
          cors: true
```
4.4.1.4.2 Function
Create a new file within the `src/handlers/` directory called `completeTodo.go`:
```go
package main

import (
    "fmt"
    "context"
    "os"
    "github.com/aws/aws-lambda-go/lambda"
    "github.com/aws/aws-lambda-go/events"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/dynamodb"
    "github.com/aws/aws-sdk-go/aws"
)

var ddb *dynamodb.DynamoDB

func init() {
    region := os.Getenv("AWS_REGION")
    if session, err := session.NewSession(&aws.Config{ // Use aws sdk to connect to DynamoDB
        Region: &region,
    }); err != nil {
        fmt.Println(fmt.Sprintf("Failed to connect to AWS: %s", err.Error()))
    } else {
        ddb = dynamodb.New(session) // Create DynamoDB client
    }
}

func CompleteTodo(ctx context.Context, request events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
    fmt.Println("CompleteTodo")

    // Parse id from the request's path parameters
    var (
        id        = request.PathParameters["id"]
        tableName = aws.String(os.Getenv("TODOS_TABLE_NAME"))
        done      = "done"
    )

    // Update row
    input := &dynamodb.UpdateItemInput{
        Key: map[string]*dynamodb.AttributeValue{
            "id": {
                S: aws.String(id),
            },
        },
        UpdateExpression: aws.String("set #d = :d"),
        ExpressionAttributeNames: map[string]*string{
            "#d": &done,
        },
        ExpressionAttributeValues: map[string]*dynamodb.AttributeValue{
            ":d": {
                BOOL: aws.Bool(true),
            },
        },
        ReturnValues: aws.String("UPDATED_NEW"),
        TableName:    tableName,
    }
    _, err := ddb.UpdateItem(input)

    if err != nil {
        return events.APIGatewayProxyResponse{ // Error HTTP response
            Body:       err.Error(),
            StatusCode: 500,
        }, nil
    } else {
        return events.APIGatewayProxyResponse{ // Success HTTP response
            Body:       request.Body,
            StatusCode: 200,
        }, nil
    }
}

func main() {
    lambda.Start(CompleteTodo)
}
```
In the above handler function:

- First, you retrieve `id` from the request’s path parameters, and `tableName` from environment variables.
- Then, you call `ddb.UpdateItem` with the `id`, `tableName`, and an `UpdateExpression` that sets the todo’s `done` column to `true`.
- Finally, you return a success or error HTTP response depending on the outcome.
4.4.1.4.3 Summary
Run `./scripts/build.sh` and `serverless deploy`. You will receive an HTTP PATCH endpoint URL that you can use to trigger the `completeTodo` Lambda function.
Verify your function by making an HTTP PATCH request to the `/todos/{id}` URL, passing in a todo ID.
You should see that the todo item’s `done` status is updated from `false` to `true`.
4.4.1.5 Step 4: Create the DELETE /todos/{id} endpoint
4.4.1.5.1 Event
First, define the `deleteTodo` Function’s HTTP Event trigger in `serverless.yml`:
```yaml
# serverless.yml

functions:
  deleteTodo:
    handler: bin/handlers/deleteTodo
    package:
      include:
        - ./bin/handlers/deleteTodo
    events:
      - http:
          path: todos/{id}
          method: delete
          cors: true
```
4.4.1.5.2 Function
Create a new file within the `src/handlers/` directory called `deleteTodo.go`:
```go
package main

import (
    "fmt"
    "context"
    "os"
    "github.com/aws/aws-lambda-go/lambda"
    "github.com/aws/aws-lambda-go/events"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/dynamodb"
    "github.com/aws/aws-sdk-go/aws"
)

var ddb *dynamodb.DynamoDB

func init() {
    region := os.Getenv("AWS_REGION")
    if session, err := session.NewSession(&aws.Config{ // Use aws sdk to connect to DynamoDB
        Region: &region,
    }); err != nil {
        fmt.Println(fmt.Sprintf("Failed to connect to AWS: %s", err.Error()))
    } else {
        ddb = dynamodb.New(session) // Create DynamoDB client
    }
}

func DeleteTodo(ctx context.Context, request events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
    fmt.Println("DeleteTodo")

    // Parse id from the request's path parameters
    var (
        id        = request.PathParameters["id"]
        tableName = aws.String(os.Getenv("TODOS_TABLE_NAME"))
    )

    // Delete todo
    input := &dynamodb.DeleteItemInput{
        Key: map[string]*dynamodb.AttributeValue{
            "id": {
                S: aws.String(id),
            },
        },
        TableName: tableName,
    }
    _, err := ddb.DeleteItem(input)

    if err != nil {
        return events.APIGatewayProxyResponse{ // Error HTTP response
            Body:       err.Error(),
            StatusCode: 500,
        }, nil
    } else {
        return events.APIGatewayProxyResponse{ // Success HTTP response
            StatusCode: 204,
        }, nil
    }
}

func main() {
    lambda.Start(DeleteTodo)
}
```
In the above handler function:

- First, you retrieve `id` from the request’s path parameters, and `tableName` from environment variables.
- Then, you call `ddb.DeleteItem` with both `id` and `tableName`.
- Finally, you return a success or error HTTP response depending on the outcome.
4.4.1.5.3 Summary
Run `./scripts/build.sh` and `serverless deploy`. You will receive an HTTP DELETE endpoint URL that you can use to trigger the `deleteTodo` Lambda function.
Verify your function by making an HTTP DELETE request to the `/todos/{id}` URL, passing in a todo ID.
You should see that the todo item is deleted from your DB table.
4.4.1.6 Writing Unit Tests
Going serverless makes your infrastructure more resilient, decreasing the likelihood that your servers fail. However, your application can still fail due to bugs and errors in business logic. Having unit tests gives you the confidence that both your infrastructure and your code are behaving as expected.
Most of your Functions make external API calls to AWS cloud services such as DynamoDB. In our unit tests, we want to avoid making any network calls: they should be able to run locally. Unit tests should not depend on live infrastructure where possible.
In Go, we use the `testify` package to write unit tests. For example:
```go
package main_test

import (
    "testing"

    main "github.com/aws-samples/lambda-go-samples"
    "github.com/aws/aws-lambda-go/events"
    "github.com/stretchr/testify/assert"
)

func TestHandler(t *testing.T) {
    tests := []struct {
        request events.APIGatewayProxyRequest
        expect  string
        err     error
    }{
        {
            // Test that the handler responds with the correct response
            // when a valid name is provided in the HTTP body
            request: events.APIGatewayProxyRequest{Body: "Paul"},
            expect:  "Hello Paul",
            err:     nil,
        },
        {
            // Test that the handler responds ErrNameNotProvided
            // when no name is provided in the HTTP body
            request: events.APIGatewayProxyRequest{Body: ""},
            expect:  "",
            err:     main.ErrNameNotProvided,
        },
    }

    for _, test := range tests {
        response, err := main.Handler(test.request)
        assert.IsType(t, test.err, err)
        assert.Equal(t, test.expect, response.Body)
    }
}
```
4.5 Summary
Congratulations! In this chapter, you learned how to design and develop an API as a set of single-purpose functions, events, and resources.
5. Where to go from here
Congratulations! You’ve gone serverless. Over the course of this book, you’ve built a CRUD Go application on AWS Lambda with the Serverless Framework.
However, the Serverless paradigm is still in its infancy and continues to evolve. New serverless platforms, frameworks, and abstractions will emerge within the next couple of years.
Here are some additional resources that I recommend for you to continue your serverless journey:
- The official Serverless blog announces regular updates as well as showcases of how other developers are going serverless.
- The Serverless forum is home to many discussions relating to the Serverless Framework & serverless architectures.
- Attend a Serverless meetup in your area to meet other developers going serverless!
- Check out what the CNCF (Cloud Native Computing Foundation) Serverless working group is exploring.
Thank you for reading Serverless Go: A Practical Guide and I wish you the best in your future endeavours.
Feel free to send in your questions, comments, issues, and suggestions for Going Serverless here.
6. Glossary
6.1 API
Application Programming Interface. A set of functions and procedures that allow the creation of applications which access the features or data of an operating system, application, or other service.
6.2 AWS
Amazon Web Services. A subsidiary of Amazon.com that provides on-demand cloud computing platforms. AWS has more than 70 services, spanning a wide range, including compute, storage, networking, database, analytics, application services, deployment, management, mobile, developer tools and tools for the Internet of Things.
6.3 CLI
Command Line Interface. A means of interacting with a computer program where the user (or client) issues commands to the program in the form of successive lines of text (command lines).
6.4 CNCF
Cloud Native Computing Foundation. A foundation that fosters a community around open source projects that orchestrate containers as part of a microservices architecture.
6.5 DSL
Domain Specific Language. A computer language specialized to a particular application domain.
6.6 FaaS
Functions as a Service. A category of cloud computing services that provides a platform allowing customers to develop, run, and manage application functionalities without the complexity of building and maintaining infrastructure.
6.7 PaaS
Platform as a Service. A category of cloud computing services that provides a platform allowing customers to develop, run, and manage applications without the complexity of building and maintaining infrastructure.
6.8 HTTP
Hypertext Transfer Protocol. A communication protocol for systems on the World Wide Web.
6.9 JSON
JavaScript Object Notation. A human-readable data-interchange format derived from JavaScript.
6.10 S3
Simple Storage Service. An object storage service from AWS.
6.11 SNS
Simple Notification Service. A fully managed pub/sub messaging service from AWS.