Sharing Code among Lambdas using Lambda Layers


Lambda layers allow us to share code among lambda functions. We just have to upload the layer once and reference it in any lambda function.

The code in a layer can be anything: dependencies, configurations, or helper functions, e.g. logging, recording metrics, etc.

We can link up to five Lambda layers per function, one of which can optionally be a custom runtime, e.g. PHP, Rust, etc. However, adding a custom runtime is not the most frequent use case for Lambda layers; sharing code across your serverless microservices is.


Lambda layers were announced at re:Invent 2018. The main purpose of a Lambda layer is to avoid duplicating code across many Lambdas, thereby promoting the separation-of-concerns design principle.

Before Lambda layers, developers used to either duplicate common code in every Lambda function or create local npm packages and reference them in their Lambdas. Now, with Lambda layers, you can securely share code among your Lambda functions in the same AWS account, across accounts, or publicly.


You can use the AWS console directly to deploy Lambda layers. However, that is not recommended for production applications. Most people use either the Serverless Framework or the AWS SAM (Serverless Application Model) CLI to deploy and manage Lambda layers.

Here is an example using the Serverless Framework.

Step 01 — Install serverless framework

Open a terminal or a command prompt and create a new folder called “backend”.

mkdir backend

Open the folder in VS Code.

Let’s install the Serverless Framework globally using npm. Lambda layer support was added in newer versions of the framework.

npm install -g serverless

Step 02 — Create serverless services

Now let’s create three serverless services: layers, todos, and users. Inside the backend folder, run the following commands to create these Node.js services.

Layer service

serverless create --template aws-nodejs --path layers

Todo service

serverless create --template aws-nodejs --path todos

User service

serverless create --template aws-nodejs --path users

Step 03 — Creating a lambda layer

Open the layers folder and select the serverless.yml file. Replace the file content with the following configuration.

service: layers

provider:
  name: aws
  runtime: nodejs12.x

layers:
  logging:
    path: logging

In the above configuration, we define a logging layer with the path logging. Let’s now add a folder called logging inside the layers folder.

mkdir logging

Since we are using the Node.js runtime, we have to create a certain folder structure inside the logging folder so that other Lambda functions can access the layer code.

NodeJS path — nodejs/node_modules/<module_name>/<files>

So, create the folder structure nodejs/node_modules/logging inside the layers folder.


Below is the content inside index.js of the logging module. For testing purposes, let’s return a simple text.

module.exports.log = () => {
  return 'Logging from layer';
};

Step 04 — Deploying the lambda layer

Now that we have created the sample logging layer, let’s go ahead and deploy it into AWS.

Make sure that you have correctly configured the access key and secret access key with the Serverless Framework. If you haven’t done so, use the command below.

serverless config credentials --provider aws --key <access_key> --secret <secret_key>

Deploying the layer

Run the following command inside the layers folder where the serverless.yml file resides.

serverless deploy --stage dev

The above command will deploy the layer into the given stage (dev) in AWS.


Copy the ARN of the Lambda layer version. We will add this ARN to the todos and users microservices so that they can reference the layer.

Step 05 — Using the layer in other lambdas

Let’s use the layer we created in step 04 in the todos and users services. We need to edit the serverless.yml file in both services to reference the layer version.

todos — serverless.yml

service: todos

provider:
  name: aws
  runtime: nodejs12.x

functions:
  todos:
    handler: handler.todos
    layers:
      - arn:aws:lambda:us-east-1:885121665536:layer:logging:9
    events:
      - http:
          path: todos
          method: get

users — serverless.yml

service: users

provider:
  name: aws
  runtime: nodejs12.x

functions:
  users:
    handler: handler.users
    layers:
      - arn:aws:lambda:us-east-1:885121665536:layer:logging:9
    events:
      - http:
          path: users
          method: get

Usage in the handler function

We have configured the serverless.yml files of the two services above so that our Lambdas have access to the logging layer.

How can we reference the log() function in a lambda handler function?


'use strict';

const logging = require('logging');

module.exports.todos = async (event) => {
  return {
    statusCode: 200,
    body: JSON.stringify({
      message: logging.log()
    }, null, 2)
  };
};

It’s as simple as referencing a regular npm dependency using the require keyword.

Now we can invoke the log() function from the logging module.

Note — Deploy both the todos and users services using ‘serverless deploy’, and then you can see the logging messages by accessing the API Gateway endpoints. The endpoints can be found in the output of the serverless deploy command.


Here are some concerns about lambda layers worth noting.

  • A Lambda function can reference only up to five layers
  • If you use multiple layers, their order is important because layers can depend on each other (this is useful when adding a custom runtime layer)
  • Layers are immutable and can be versioned to manage updates
  • You can share layers securely (using IAM) within your AWS account, across accounts, and even publicly


A Lambda layer is just a blob of data in a zip file. The total uncompressed size of a function and all of its layers must be less than 250 MB. This is a limit set by AWS.


AWS Encryption SDK

This is part 03 of the Data Encryption on AWS series. You may find the related video for this blog post here.

In the previous blog post, we discussed how to use OpenSSL with AWS KMS in order to encrypt/decrypt sensitive data. You can find the previous blog post here. Today let us focus on AWS Encryption SDK with an example.

What is AWS Encryption SDK?

The AWS Encryption SDK is a client-side encryption library designed to make it easy for everyone to encrypt and decrypt data using industry standards and best practices. It enables you to focus on the core functionality of your application, rather than on how to best encrypt and decrypt your data.

Source — AWS Documentation

The AWS Encryption SDK only requires us to provide it with one or more master keys; then we can use straightforward methods to encrypt and decrypt our data on the client side.

You don’t have to worry about which encryption algorithm to use, how to maintain data keys for the given master keys, or how to ensure the data has not been tampered with between the time it is written and when it is read. The AWS Encryption SDK handles everything for you. That makes our lives very easy!

At this point in time, the AWS Encryption SDK supports the JavaScript, C, Java, and Python languages. We use the JavaScript SDK in the example below.

When to use AWS Encryption SDK?

You can use the Encryption SDK for client-side encryption in browsers and other clients (e.g. an EC2 instance that processes data before storing it in S3 or a database). It is quite useful for encrypting data in distributed systems that communicate with different microservices and third-party services.

However, AWS offers two other encryption clients for client-side encryption of data.

  1. DynamoDB Encryption Client
  2. S3 Encryption Client

The AWS Encryption SDK and the above encryption clients are not compatible. You cannot use the AWS Encryption SDK to decrypt data that was encrypted by the DynamoDB Encryption Client. AWS recommends using the above encryption clients if you are specifically working with DynamoDB or S3, as they provide additional functionality suited to those services. For example, the DynamoDB Encryption Client preserves partition/sort keys and encrypts only the other data attributes.

AWS Encryption SDK can be used for general encryption workloads.

How to use AWS Encryption SDK?

The AWS Encryption SDK provides SDKs for various programming languages. If you use the JavaScript SDK to encrypt data in the browser, you can use any of the other SDKs (e.g. Java, Node.js) to decrypt it on the server side.

Following is an example of how you can use the Encryption SDK in NodeJS.

Step 01 — Setting up Encryption SDK for NodeJS

First of all, you need to install the AWS Encryption SDK for NodeJS.

npm install @aws-crypto/client-node

Then require the KmsKeyringNode, encrypt, and decrypt methods from the SDK.

const { KmsKeyringNode, encrypt, decrypt } = require("@aws-crypto/client-node");

This example uses KMS as the key management infrastructure, so we use a KMS keyring. As the names suggest, the encrypt and decrypt methods are used to encrypt and decrypt data with data keys generated by the keyring.


The AWS Encryption SDK for JavaScript uses a keyring to perform envelope encryption, that is, encrypting data keys with the master keys in KMS. You need to provide a reference to the master key, and the keyring will create and manage data keys to encrypt and decrypt data.

Step 02 — Configuring KMSKeyring with a CMK

Now that we have required the KmsKeyringNode, let’s configure it with a CMK (Customer Master Key) created in AWS KMS. You can provide the ARN of the CMK to configure the keyring.

const masterKeyId = "arn:aws:kms:us-east-1:123456:key/beee-abce-..";
const keyring = new KmsKeyringNode({ generatorKeyId: masterKeyId });

(You can also provide multiple master keys in order to encrypt each data key multiple times for additional security.)

Step 03 — Creating an Encryption Context

You can optionally create an encryption context for your plaintext sensitive data in order to verify at decryption time whether the data has been tampered with. (Creating an encryption context is recommended.)

First, create a context with any useful metadata. Then pass the context to the encrypt method together with the plaintext sensitive data.

let plainText = "My passwords for sensitive data";

const context = {
  accountId: "100",
  purpose: "youtube demo",
  country: "Sri Lanka"
};

Step 04 — Encryption & Decryption

Now that we have the encryption context, we can start encrypting the data with Encryption SDK.

let plainText = "My passwords for sensitive data";

const { result } = await encrypt(keyring, plainText, { encryptionContext: context });

We use the encrypt method from the SDK and pass the keyring, the plaintext, and the encryption context as parameters. As a result, we get the encrypted version of the plaintext sensitive data.

We can decrypt this encrypted data in a different microservice of the distributed system using the decrypt function of the SDK. The decrypt function expects the keyring and the encrypted data as parameters.

const { plaintext, messageHeader } = await decrypt(keyring, encryptedData);

It will return the plaintext data and the “messageHeader” as a result of the decrypt call. The messageHeader contains the encryption context we supplied at the point of encryption, so we can verify it to make sure the data has not been tampered with.

Step 05 — Verifying Encryption Context

Let’s compare the original encryption context and the context returned by the decrypt call.

let originalContext = {
  accountId: "100",
  purpose: "youtube demo",
  country: "Sri Lanka"
};

Object.entries(originalContext).forEach(([key, value]) => {
  if (messageHeader.encryptionContext[key] === value) {
    console.log("Awesome. It is matching!");
  } else {
    throw new Error("Someone has changed the data");
  }
});

If all the original context attributes match, we can conclude that the encrypted data is intact.
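The same check can also be packaged into a small reusable helper. A minimal sketch (the contextMatches function name is mine, not part of the SDK):

```javascript
// Returns true only if every attribute of the original encryption context
// is present, unchanged, in the context returned with the message header.
function contextMatches(original, returned) {
  return Object.entries(original).every(([key, value]) => returned[key] === value);
}

// The returned context may contain extra entries added by the SDK;
// we only require that our own attributes are intact.
const ok = contextMatches(
  { accountId: "100", purpose: "youtube demo" },
  { accountId: "100", purpose: "youtube demo", "aws-crypto-public-key": "..." }
);
console.log(ok); // true
```

Checking only our own keys (rather than deep-equality of both objects) matters, because the SDK can add its own entries to the stored context.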



Data Encryption on AWS — Part 02

In part 01, we discussed the main concepts around AWS KMS.

OpenSSL and the AWS Encryption SDK are used for client-side encryption outside AWS. This blog post focuses on how to interact with KMS using the AWS CLI and OpenSSL for data encryption and decryption. In the next part, we will also discuss the AWS Encryption SDK with examples.

Encrypt/Decrypt using OpenSSL

OpenSSL is a full-featured cryptographic toolkit that we can use, together with the AWS CLI, to encrypt and decrypt data with keys from AWS KMS. (You can install the OpenSSL toolkit for your operating system.)

Step 01 — Creating a CMK

Let’s start by creating a CMK in our AWS account. This can be done using the AWS Console, AWS SDKs or AWS CLI. I use the AWS Console.

Login to your AWS account and go to AWS KMS.

Select the region N.Virginia (us-east-1) from the top right side of the console and click “Create Key”

Select the Symmetric encryption type and click “Next”. In symmetric encryption, the same key is used for both encryption and decryption. AWS recommends using a symmetric CMK for most cases.

“Use a symmetric CMK for most use cases that require encrypting and decrypting data. The symmetric encryption algorithm that AWS KMS uses is fast, efficient, and assures the confidentiality and authenticity of data.” — AWS Documentation

Provide an alias for the key in the next step. The alias is useful for referencing the CMK easily.

Tell KMS about the key administrators. By default, the root user has all permissions. You can select the IAM users who can administer the key and use the key. Click next.

Now select the IAM users who need key usage permissions.

Finally, review the permissions and click finish to create the CMK.

Step 02 — Generating Data-Keys for the CMK

A CMK only allows encrypting data smaller than 4 KB. If we have a larger payload to encrypt, we need data keys generated from that CMK. (See the video for more details.)

Let’s use AWS CLI to call KMS service and generate data keys for the CMK we just created.

Note: Follow the instructions to install the AWS CLI and configure it for your operating system.

We refer to the CMK by the alias (e.g. youtube) we have provided during the creation process at step 01.

aws kms generate-data-key --key-id alias/youtube --key-spec AES_256 --region us-east-1

Response (this is mock data):

{
    "CiphertextBlob": "ADIDAHiiF6PCTM1Hou+61r+M/pyUfwSizO02mH9+pIa0gaFRWwFF+FoN25Pm+tdPZiB0paGRAAAAfjB8BgkqhkiG9w0BBwabbzBtAgEAMGgGCSqGSIb3DQEHATAeBglghkgBZQMEAS4wEQQMIB9YpWJsDdZjP4BVAgEQgDvigjj2IaJoDmXJPS2AWG6OHqMwI8H5ybsS6l0Rt26fVUskQTxxWvCzkLSqssqi3bDnEysfaxN/ryXO7w==",
    "Plaintext": "7DmPVPgzJ8exc9+AekcEmVL7jdv0RWMxPgA4JlrpE4k=",
    "KeyId": "arn:aws:kms:us-east-1:123456789:key/bbee76a1-bd25-4d57-81d8-38ff2b26468a"
}

It returns both the Plaintext version of the data key and the Encrypted (Ciphertext) version of the same data key. Both keys are base64 encoded, so let’s decode and save them into datakey and encrypted-datakey files.

echo "7DmPVPgzJ8exc9+AekcEmVL7jdv0RWMxPgA4JlrpE4k=" | base64 --decode > datakey

echo "ADIDAHiiF6PCTM1Hou+61r+M/pyUfwSizO02mH9+pIa0gaFRWwFF+FoN25Pm+tdPZiB0paGRAAAAfjB8BgkqhkiG9w0BBwabbzBtAgEAMGgGCSqGSIb3DQEHATAeBglghkgBZQMEAS4wEQQMIB9YpWJsDdZjP4BVAgEQgDvigjj2IaJoDmXJPS2AWG6OHqMwI8H5ybsS6l0Rt26fVUskQTxxWvCzkLSqssqi3bDnEysfaxN/ryXO7w==" | base64 --decode > encrypted-datakey

We will use them in the next step.
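As a quick sanity check, the decoded plaintext key should be exactly 32 bytes, since we requested an AES_256 key. A small sketch in Node, using the mock key from above:

```javascript
// Decode the (mock) base64 data key and verify it is a 256-bit key.
const keyB64 = "7DmPVPgzJ8exc9+AekcEmVL7jdv0RWMxPgA4JlrpE4k=";
const dataKey = Buffer.from(keyB64, "base64");
console.log(dataKey.length); // 32 bytes = 256 bits, matching --key-spec AES_256
```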

Step 03 — Encrypting data with Plaintext Data-Key

Now we use the Plaintext data key to encrypt our data.

First of all, we need data to encrypt. Let’s create a passwords.txt file with some data. In general, this will be the sensitive data that we need to protect.

echo "My database password" > passwords.txt

Now, let’s use the data key to encrypt our sensitive data. We will output the encrypted data into a file called passwords-encrypted.txt.

openssl enc -in ./passwords.txt -out ./passwords-encrypted.txt -e -aes256 -kfile ./datakey

After encrypting the data, we must NOT forget to delete the plaintext data key. Otherwise, anyone could use that key to decrypt our secret data.

rm datakey

Step 04 — Decrypting data with Encrypted Data Key

Now that we have removed the key that was used to encrypt the data, how do we decrypt it at a later point in time?

For that, we use the encrypted data key that was stored with the encrypted data. We already discussed KMS concepts in depth in the previous blog post as well as in the Data Encryption on AWS video.

We need to pass the encrypted data key to KMS and request the plaintext data key. It will return the same plaintext data key that we used to encrypt the sensitive data.

aws kms decrypt --ciphertext-blob fileb://./encrypted-datakey  --region us-east-1

[Output - Mock data]

{
    "Plaintext": "xyQtd+/oB0ob1Gr9dmkQ4JBSR1+jQRZrK1sLAVdJIHg=",
    "KeyId": "arn:aws:kms:us-east-1:123456789:key/beae46a1-bd25-4d37-81d8-38ff1b26469a"
}

Great! Now we can use this plaintext data key to decrypt our data. But first, let’s base64 decode it and save it as datakey again.

echo "xyQtd+/oB0ob1Gr9dmkQ4JBSR1+jQRZrK1sLAVdJIHg=" | base64 --decode > datakey

Now we can finally decrypt our encrypted data with the data key we received. The decrypted sensitive data is written to a file called passwords-decrypted.txt.

openssl enc -in ./passwords-encrypted.txt -out ./passwords-decrypted.txt -d -aes256 -kfile ./datakey

Now if you open the passwords-decrypted.txt you should find the original plaintext data.

Congratulations! We have successfully completed the encryption and decryption of our sensitive data.


Data Encryption on AWS

This blog post is related to Data Encryption on AWS youtube video.

Imagine that your server got hacked. Now the hacker has full access to the sensitive data stored on the disk. You are in big trouble, since you haven’t encrypted that data, and the hacker can do whatever he wants with your plaintext data.

Encryption is vital if you deal with sensitive data that must not be accessed by unauthorized users. Regulations like the GDPR (General Data Protection Regulation) instruct companies to encrypt both data in transit and data at rest. This article is about how to encrypt your data on AWS.

Encryption at Rest vs in Transit

When you deliver your website over HTTPS by associating an SSL certificate with your domain, the browser makes sure to encrypt the data in transit. The communication between the browser and the server is encrypted. However, as soon as the data (e.g. username and password) reaches the point where SSL termination happens (at the server itself, a load balancer, CloudFront, etc.), it is decrypted. After that, the server stores the plaintext (e.g. username and password) in server storage or in databases. If you want to avoid saving plaintext, you have to enable encryption at rest.

Encryption at Rest

This is about encrypting the data that you store in the backend servers and databases. There are two main methods to encrypt data at rest.

  1. Client-Side Encryption
  2. Server-Side Encryption

Client-Side Encryption

As the name implies, this method encrypts your data on the client side before it reaches backend servers or services. You have to supply encryption keys 🔑 to encrypt the data on the client side. You can either manage these encryption keys yourself or use AWS KMS (Key Management Service) to manage them under your control.

AWS provides multiple client-side SDKs to make this process easy for you. E.g. AWS Encryption SDK, S3 Encryption Client, DynamoDB Encryption Client etc…

Server-Side Encryption

In server-side encryption, AWS encrypts the data on your behalf as soon as it is received by an AWS service. Most AWS services support server-side encryption, e.g. S3, EBS, RDS, DynamoDB, Kinesis, etc.

All these services are integrated with AWS KMS in order to encrypt the data.


AWS KMS (Key Management Service) is the service that manages encryption keys on AWS. These encryption keys are called “Customer Master Keys”, or CMKs for short. KMS uses Hardware Security Modules (physical devices, commonly known as HSMs) to store CMKs. AWS KMS is integrated with many AWS services, and it uses AWS CloudTrail to track key usage logs for audit and compliance needs.

Customer Master Keys(CMKs) VS Data Keys

CMKs are created and managed by AWS KMS. However, a CMK can only be used to encrypt small amounts of data, less than 4 KB. AWS does not encrypt gigabytes of data using a CMK. If you have large data to encrypt, use data keys.

Data keys are generated from CMKs. There is a direct relationship between a data key and a CMK. However, AWS does NOT store or manage data keys. Instead, you have to manage them yourself.

Look at the following diagram.

Image 1 — Generate Data Keys from a CMK (Ref — AWS Documentation)

You can use one Customer Master Key (CMK) to generate thousands of unique data keys. You can generate data keys from a CMK using two methods.

  1. Generate both Plaintext Data Key and Encrypted Data Key
  2. Generate only the Encrypted Data Key

Image-1 illustrates how to generate both plain-text and encrypted data keys using a CMK.

Encrypt/Decrypt Data

Once you get the Plaintext data key and Encrypted data key from the CMK, use the Plaintext data key to encrypt your data. After encryption, never keep the Plaintext data key together with the encrypted data (Ciphertext), since anyone can decrypt the Ciphertext using the Plaintext key. So remove the Plaintext data key from memory as soon as possible. You can keep the Encrypted data key with the Ciphertext. When you want to decrypt it, call the KMS API with the encrypted data key, and KMS will send you the Plaintext key if you are authorized to receive it. Afterward, you can decrypt the Ciphertext using the Plaintext key.

Envelope Encryption

The method of encrypting a key using another key is called Envelope Encryption. By encrypting the key that is used to encrypt data, you protect both the data and the key.

Image 2 — Envelope Encryption (Ref — AWS Documentation)

In AWS, you can encrypt the data key used to encrypt the data with a Customer Master Key (CMK). But where do you store the CMK? AWS KMS stores it inside a Hardware Security Module (HSM) with a greater level of protection. (The HSMs are compliant with the FIPS 140-2 security standard.)

Key Policies

One of the powerful features of KMS is the ability to define permissions separately for those who use the keys and those who administer them. This is achieved using key policies.

{
  "Sid": "Enable IAM User Permissions",
  "Effect": "Allow",
  "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
  "Action": "kms:*",
  "Resource": "*"
}

The above key policy statement is applied to the root user of the account. It allows full access to the CMK to which this policy is attached. For other users and roles, you can manage key usage and key administration as follows.

{
  "Sid": "Allow use of the key",
  "Effect": "Allow",
  "Principal": {"AWS": "arn:aws:iam::111122223333:user/manoj"},
  "Action": [
    "kms:Encrypt",
    "kms:Decrypt",
    "kms:ReEncrypt*",
    "kms:GenerateDataKey*",
    "kms:DescribeKey"
  ],
  "Resource": "*"
}

The above policy is applied to the IAM user ‘manoj’. Now he has permission to use the CMK for encryption and decryption. However, he is not allowed to administer that CMK.

{
  "Sid": "Allow access for Key Administrators",
  "Effect": "Allow",
  "Principal": {"AWS": [
    "arn:aws:iam::111122223333:user/<admin-user>"
  ]},
  "Action": [
    "kms:Create*",
    "kms:Describe*",
    "kms:Enable*",
    "kms:List*",
    "kms:Put*",
    "kms:Update*",
    "kms:Revoke*",
    "kms:Disable*",
    "kms:Get*",
    "kms:Delete*",
    "kms:ScheduleKeyDeletion",
    "kms:CancelKeyDeletion"
  ],
  "Resource": "*"
}

Now, the above key policy allows the administrators to administer the CMK to which it is applied. However, an administrator cannot use the key to encrypt or decrypt data.

Key Rotation

Cryptographic best practices discourage extensive reuse of encryption keys. Because of that, AWS allows rotating Customer Master Keys (CMKs). You can enable automatic rotation for the CMKs you have created in KMS, commonly known as customer managed CMKs. Once you enable automatic rotation, KMS rotates the key’s cryptographic material (the backing key) every year. CMKs managed by AWS, however, are rotated only every three years, and you cannot change the rotation frequency for AWS managed CMKs.

Reference – AWS Documentation

It is important to understand that AWS KMS keeps references to the older backing keys when rotating, so that KMS is able to decrypt data or data keys that were generated with older versions of the backing key. Otherwise, that data could never be decrypted.

In the next post, let’s discuss S3 and EBS encryption.


Improving the UX of your website with an intelligent chatbot | AWS Lex

Note: The video series of this blog post is available here.

User experience is one of the most important concerns in building modern web applications. No matter how feature-rich your website is, if people don’t find it intuitive to use, you will not reach your potential customers.

AWS Lex allows users to easily interact with your website using natural language via a conversational chatbot. Your users can chat in voice or in text and use your product services without having to go through complex user interfaces.

As a developer, you don’t have to be a machine learning/deep learning expert to embed a chatbot into your website. AWS Lex provides advanced deep learning functionality and automatic speech recognition out of the box.

In this post, we are going to create a conversational bot that finds weather information as per user’s requests. Following are the technologies that we will be using in this project.

  1. Amazon Lex
  2. Amazon Cognito
  3. AWS Lambda
  4. AWS IAM
  5. Amplify Library
  6. Angular Framework

Please find the github repo related to this post at

Getting the bot ready for training

First, let’s create our bot and get it ready for training. We want our bot to search for weather information when a user has requested it. In this guide, let’s only consider communication via text.

Login to AWS console and Go to AWS Lex

If you haven’t created a bot before, select Get Started and you will be directed to the create-bot window.

Creating a Custom Bot

We can either select an already-created bot or create our own custom bot. Let’s select Custom Bot.

Let’s give our bot a name (I named it “WeatherBot”) and fill in the other configuration as in the image above. Since we are not enabling voice, select the text-based application option. Afterward, click Create and you will be presented with a new screen as below.

We need to understand the terminology of AWS Lex bots. There are five main concepts to remember.

  1. Intents
  2. Utterances
  3. Slots
  4. Prompts
  5. Fulfillment

Intents are the intentions for which someone would use the bot. In our example, someone might want to know the weather of a particular city in the world, so we will have an intent called “FindWeather”. We can have more than one intent per bot; another intent could be “GreetUser”. Utterances are the different phrases with which a user interacts with the bot. An example utterance is “How is the weather in Colombo?”

When a user utters a phrase in text or in voice, the bot will match it to a corresponding intent. The utterance “How is the weather in Colombo” will be matched to the “FindWeather” intent. In order to fulfill the user’s intent, the bot will ask the user any other required questions. These questions are called “prompts”. Once a user replies to a prompt, the reply is stored in a variable for later use; these variables are called “slots”. When all the required slots for an intent are collected, the bot will fulfill the user’s intention. The “fulfillment” may involve calling a third-party service to search for information, talking to a database, executing some logic in a Lambda function, etc. Our example WeatherBot will call the OpenWeatherMap API with the user-requested city, search for weather information, and send it back to the user.

Creating Intents

Click “Create Intent” button to create the first intent for our WeatherBot. Let’s call it “FindWeather” and Add it.

Now let’s add some sample utterances a user might say. They will help our chatbot learn about user inputs.

I have added three utterances as shown above. Note that two of those utterances have the {City} variable. This is a required slot value for the FindWeather intent. We can fill the slot from the user’s utterance itself. The “Tell me about the weather” utterance doesn’t involve the {City} slot, so the bot will ask the user about the city using a prompt.

In the Slots section, let’s add our City slot. We can define the prompt message together with it. Our bot can use the prompt to get the City slot filled by the user if it wasn’t already provided in the user’s initial utterance.

Our bot requires only one variable to search for the weather: the City. Once it is received from the user, the bot will call the action for fulfillment of the intent.

In our case, let’s call a Lambda function that talks to OpenWeather API to search the weather for the requested City.

Creating the Lambda Function

Let’s use the Serverless Framework to create our lambda function. If you haven’t already installed Serverless Framework please visit this link.

Once you have installed the Serverless Framework and configured credentials for your AWS account, create a serverless service using the command below.

serverless create --template aws-nodejs --path weather-bot

Once creation is completed, change directory into weather-bot folder and open the files in your favorite IDE.


Let’s add a simple function as shown above. I will call it getWeather. The logic of the function lies in the handler.js file.

Lambda logic to obtain weather info from OpenWeatherMap API

I have obtained a free API key from OpenWeatherMap and added it as a query parameter (APPID) in the URL. I extracted the “City” slot value that was provided by the user and passed it in the URL as well. The “units” parameter is set to “metric” in order to get temperature values in Celsius.

I also used the “axios” npm library to easily send HTTP requests to the server. You may use the default “http” Node module if you prefer. Note how the answer variable is constructed: it is built with string concatenation to form a natural-language-like response. This response is returned from the Lambda function as a special JSON object, which is required for our bot to read the answer properly. For more information about the request/response JSON object format supported by AWS Lex, see this link to the official documentation.
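For reference, a fulfillment Lambda can close the conversation by returning a response shaped roughly like the following. This is a sketch of the Lex V1 “Close” dialog action; the buildLexResponse helper name is mine:

```javascript
// Wraps a natural-language answer in the JSON structure Lex expects
// back from a fulfillment Lambda (V1 "Close" dialog action).
function buildLexResponse(message) {
  return {
    dialogAction: {
      type: "Close",
      fulfillmentState: "Fulfilled",
      message: {
        contentType: "PlainText",
        content: message
      }
    }
  };
}

// The answer itself is plain string concatenation, as described above.
const answer = "It is " + 12 + " degrees Celsius in London with light rain.";
const response = buildLexResponse(answer);
console.log(response.dialogAction.message.content);
```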

Alright! Let’s deploy our Lambda function using the following commands. Before that, don’t forget to install the axios library from npm.

# First, install axios
npm install axios

# Then deploy
serverless deploy

Once it is successfully deployed, go back to Lex console and select the lambda function name under the Fulfillment section.

Testing our bot

We have done all the configuration required on our side. Now let’s build the bot and let AWS Lex train its neural network. Once the build is complete, we can test our bot.

Once it showed the success message, you can start testing it in the console itself.

The bot further asks a question since we didn’t provide it the city name.

As you can see, our bot successfully connected to the OpenWeatherMap API and sent us the weather information for London.

Adding our bot to the website

Now that we have an awesome bot, we need to add it to our production website so our users can directly interact with it.

Let’s create an Angular website and use the AWS Amplify library to connect to our WeatherBot securely. We need to install the Angular CLI and the Amplify CLI globally and configure Amplify with AWS credentials.

// Install Angular CLI globally
npm install -g @angular/cli

// Create a new Angular project
ng new my-bot-website --style=scss --routing

// Install Amplify CLI globally
npm install -g @aws-amplify/cli

// Configure Amplify with AWS IAM credentials
amplify configure

In order to communicate with our bot securely, we have to make sure only logged-in users can talk to it. As the next steps, let’s add a login to the website and then use Amplify’s out-of-the-box interaction component to connect to our WeatherBot.

Once a user successfully logs into the application, AWS Cognito assigns them an IAM role. The permissions/policies for invoking backend services are associated with this role. Since our user will communicate with the AWS Lex chatbot, we need to grant that authenticated role permission to call AWS Lex. You can find the IAM role name assigned to logged-in users in the Cognito identity pool configuration.

I’m not going to include those steps in this blog post, as it is already lengthy. Instead, let me share the GitHub URL for the code here.

You can clone the code from the repo and run amplify init to initialize your AWS resources. To run it in the browser, use amplify serve.

You can use Amplify library itself to host your website along with the bot in an S3 bucket.

I hope someone will find this post useful.



Sentiment Analysis with AWS Comprehend | AI/ML Series

In the last post we discussed how to add speaking ability to our applications using AWS Polly. Let’s extend the same example to analyze the sentiment of the text the user types.

As usual, I recommend watching the following video before reading this blog post, and using this post as a reference when building out the application on your own.

AWS Comprehend Service

AWS Comprehend uses natural language processing (NLP) to extract insights from content without any preprocessing on your part. It can recognize entities, languages, sentiments, key phrases, and other common elements of a given text or document. One common use case for AWS Comprehend is analyzing the social media feed about your product and taking the necessary actions based on your users’ sentiments.

Calling Comprehend API Methods

Let’s use AWS Lambda, our serverless function, to talk to the AWS Comprehend service and do a sentiment analysis. We are going to use the detectSentiment and detectDominantLanguage API methods from the AWS Comprehend JavaScript SDK. Refer to the full SDK documentation here.

Firstly, we create an endpoint that triggers the Lambda function. Go to your serverless.yml and add this piece of code.

functions:
  analyze:
    handler: handler.analyze
    events:
      - http:
          path: analyze
          method: post
          cors: true

It will create a new endpoint in API Gateway with the path /analyze that triggers the analyze Lambda function. Here is the analyze function code, which needs to be in handler.js.

const AWS = require('aws-sdk');
const comprehend = new AWS.Comprehend();

module.exports.analyze = (event, context, callback) => {
  let body = JSON.parse(event.body);

  const params = {
    Text: body.text
  };

  // Detecting the dominant language of the text
  comprehend.detectDominantLanguage(params, function (err, result) {
    if (!err) {
      const language = result.Languages[0].LanguageCode;

      const sentimentParams = {
        Text: body.text,
        LanguageCode: language
      };

      // Analyze the sentiment
      comprehend.detectSentiment(sentimentParams, function (err, data) {
        if (err) {
          callback(null, {
            statusCode: 400,
            headers: {
              "Access-Control-Allow-Origin": "*"
            },
            body: JSON.stringify(err)
          });
        } else {
          callback(null, {
            statusCode: 200,
            headers: {
              "Access-Control-Allow-Origin": "*"
            },
            body: JSON.stringify(data)
          });
        }
      });
    }
  });
};
At the top of handler.js, you need a reference to the Comprehend client from the AWS SDK. Then we first identify the dominant language of the text by calling the detectDominantLanguage API method, and pass that language code to the next API call, detectSentiment, inside the callback of the first method.

As a result, you will get the overall Sentiment along with the scores for the Negative, Positive, Neutral, and Mixed sentiments. We then send that back to the frontend.
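To make the response shape concrete, here is a small sketch (not part of the original code) that picks the highest-scoring sentiment out of a detectSentiment-style result. The field names follow the Comprehend response format; the sample values are made up:

```javascript
// Pick the key with the highest score from a detectSentiment-style result.
// SentimentScore field names follow the Comprehend response shape.
function strongestSentiment(result) {
  const scores = result.SentimentScore;
  return Object.keys(scores).reduce(
    (best, key) => (scores[key] > scores[best] ? key : best)
  );
}

// Hypothetical sample response
const sample = {
  Sentiment: 'POSITIVE',
  SentimentScore: { Positive: 0.93, Negative: 0.01, Neutral: 0.05, Mixed: 0.01 }
};

console.log(strongestSentiment(sample)); // → "Positive"
```

In our case the Lambda forwards the whole data object, so the frontend can simply read result.Sentiment directly.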

IAM Permission for AWS Comprehend

We are now almost finished with the backend, except that we have to attach a policy allowing AWS Comprehend actions to the IAM role attached to the Lambda function. If you haven’t read part 01 of this series, read or watch it to see how to set up an IAM role for the Lambda.
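A sketch of such a policy statement, assuming we only need the two actions this post calls (attaching the broader AWS-managed ComprehendFullAccess policy also works):

```json
{
  "Effect": "Allow",
  "Action": [
    "comprehend:DetectSentiment",
    "comprehend:DetectDominantLanguage"
  ],
  "Resource": "*"
}
```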

Our IAM role was youtube-polly-actual-role. It had an ARN, and we referred to it in the serverless.yml file as follows.


Go to the IAM console of your AWS account and attach a new policy to that same role, as shown below.

Setting up the Frontend

We have been using an Angular app as the frontend in the earlier project. Let’s continue by adding a button below the user text area and calling our API endpoint.

Go to app.component.html and add this simple HTML code to display an additional button next to the “Speak” button. We will also display the returned sentiment value in a suitable color below the button.

<div style="margin: auto; padding: 10px; text-align: center;">
  <h2>Write Something...</h2>
  <textarea #userInput style="font-size: 15px; padding: 10px;" cols="60" rows="10"></textarea>
  <select [(ngModel)]="selectedVoice">
    <option *ngFor="let voice of voices" [ngValue]="voice">{{voice}}</option>
  </select>
  <div style="margin-top: 10px">
    <button style="font-size: 15px;" (click)="speakNow(userInput.value)">Speak Now</button>
    <button style="font-size: 15px;" (click)="analyze(userInput.value)">Analyze</button>

    <!-- Following section will show the returned sentiment value with a suitable color -->
    <h2 *ngIf="sentiment=='POSITIVE'" style="color: green;">{{sentiment}}!</h2>
    <h2 *ngIf="sentiment=='NEUTRAL'" style="color: orange;">{{sentiment}}</h2>
    <h2 *ngIf="sentiment=='NEGATIVE'" style="color: red;">{{sentiment}}!</h2>
  </div>
</div>

Let’s add the analyze function in the app.component.ts file and make use of a service to call the API Gateway endpoint.

import { Component } from '@angular/core';
import { APIService } from './api.service';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.scss']
})
export class AppComponent {
  sentiment = null;

  constructor(private api: APIService) {}

  analyze(input) {
    const data = {
      text: input
    };
    this.api.analyze(data).subscribe((result: any) => {
      this.sentiment = result.Sentiment;
    });
  }
}

Let’s create a frontend API service to call the /analyze endpoint and return the data. Go to api.service.ts and add this code.

import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';

@Injectable({
  providedIn: 'root'
})
export class APIService {
  // Replace with your own API Gateway base URL
  endpoint = '<your-api-gateway-endpoint>';

  constructor(private http: HttpClient) {}

  speak(data) {
    return'/speak', data);
  }

  analyze(data) {
    return'/analyze', data);
  }
}
Our frontend is now complete. It will send the user input to the backend endpoint, and the Lambda function will figure out the language of the text and send back the sentiment analysis.




Building a Profile App – Part 01

This blog post is connected to the following YouTube video. I recommend watching the video first and using this blog post to copy the code snippets and build the application yourself.

Watch the video here

Creating the Angular App

Let’s start a new Angular application using the ng new command.

ng new profileApp

Select YES for Angular routing and select SCSS when prompted by the CLI. After the project is created, change into the profileApp folder:

cd profileApp

Now let’s create two components for Login page and Profile landing page.

ng g c auth
ng g c profile

Installing Amplify Libraries

It’s time to add the amplify and aws-appsync libraries. Firstly, install the Amplify CLI globally and configure it with your AWS account.

npm install -g @aws-amplify/cli
amplify configure

Afterwards, we need to install the aws-amplify and aws-amplify-angular libraries, as we will use them in our profile app.

npm install --save aws-amplify
npm install --save aws-amplify-angular

Additional configuration for the Angular App

We need to add some polyfills and additional configuration to get Amplify and AppSync to work with our Angular application. Otherwise you’ll waste a lot of time troubleshooting errors.

In the polyfills.ts file (src/polyfills.ts), add the following two lines at the top of the file.

(window as any).global = window; 
(window as any).process = { browser: true };

Also go to index.html (src/index.html) and add the following script within the <head> tags.

<script>
  if (global === undefined) {
    var global = window;
  }
</script>
Now go to the app’s TypeScript config file under src/ and add "node" to the compilerOptions types.

{
  "extends": "../tsconfig.json",
  "compilerOptions": {
    "outDir": "../out-tsc/app",
    "types": ["node"]
  },
  "exclude": ["test.ts", "**/*.spec.ts"]
}

Initializing an Amplify Project on Cloud

At this point, we can initialize an amplify project using the amplify cli.

amplify init
## Provide following answers when prompted

? Enter a name for the project profileApp
? Enter a name for the environment dev
? Choose your default editor: Visual Studio Code
? Choose the type of app that you're building javascript
? What javascript framework are you using angular
? Source Directory Path: src
? Distribution Directory Path: dist/profileApp
? Build Command: npm run-script build
? Start Command: ng serve
## Choose your aws profile when prompted as well

After the process is completed, let’s add two amplify categories for auth and api.

amplify add auth
## Provide following answer for the prompt

Do you want to use the default authentication and security configuration? Yes, use the default configuration.
amplify add api
## Provide following answers for the prompts

? Please select from one of the below mentioned services GraphQL
? Provide API name: profileapp
? Choose an authorization type for the API Amazon Cognito User Pool
Use a Cognito user pool configured as a part of this project
? Do you have an annotated GraphQL schema? No
? Do you want a guided schema creation? Yes
? What best describes your project: Single object with fields (e.g., “Todo” with ID, name, description)
? Do you want to edit the schema now? Yes

Now, Amplify will open the schema.graphql file with a sample model. While the command prompt is still open, replace the content with the following GraphQL model, save the file, and press Enter to continue in the command prompt.

type User @model {
  id: ID!
  username: String!
  firstName: String
  lastName: String
  bio: String
  image: String
}
At this point, we have created the templates for all the AWS resources locally. We need to push the templates to actually create the services. To do that, type:

amplify push
## Provide following answers when prompted

? Are you sure you want to continue? Yes
? Do you want to generate code for your newly created GraphQL API Yes
? Choose the code generation language target angular
? Enter the file name pattern of graphql queries, mutations and subscriptions src/graphql/**/*.graphql
? Do you want to generate/update all possible GraphQL operations - queries, mutations and subscriptions Yes
? Enter maximum statement depth [increase from default if your schema is deeply nested] 2
? Enter the file name for the generated code src/app/API.service.ts

It will take a few minutes to provision the resources on AWS. Be patient 🙂

Configuring Amplify Libraries with the App

Now that we have configured the resources on AWS, Amplify creates a new file, aws-exports.js, in the frontend directory structure, containing all the configuration details of those services.

Let’s use that file to set up the connection from the Angular frontend to the AWS backend.

Go to the main.ts (src/main.ts) file and configure Amplify.

import Amplify from 'aws-amplify';
import amplify from './aws-exports';

Amplify.configure(amplify);

Now let’s import the amplify-angular library to use its ready-made higher-order components for our login.

Go to app.module.ts and import AmplifyAngularModule and AmplifyService.

import { AmplifyAngularModule, AmplifyService } from 'aws-amplify-angular';

@NgModule({
  declarations: [
    // ... your components
  ],
  imports: [
    // ... other modules
    AmplifyAngularModule
  ],
  providers: [AmplifyService]
})

Now we can use the <amplify-authenticator></amplify-authenticator> component directly in the auth component HTML and get complete login functionality. (Magical!)

But before that, let’s set up our routes in the app-routing.module.ts file. We have two basic routes: one for the login screen and the other for our profile component.

In the app-routing.module.ts file add the routes.

const routes: Routes = [
  {
    path: 'profile',
    component: ProfileComponent
  },
  {
    path: 'login',
    component: AuthComponent
  },
  {
    path: '**',
    redirectTo: 'login',
    pathMatch: 'full'
  }
];

Adding the Login Component

It’s time to add the login screen. Go to auth.component.html and add this code. It will turn into a login screen.

<amplify-authenticator></amplify-authenticator>
Before running the application to check the login screen, you need to add the amplify-authenticator styles to the styles.scss file.

Add this line of css in the style.scss file (src/styles.scss)

@import '~aws-amplify-angular/theme.css';

We need to remove the default content that Angular added to the app.component.html file, so let’s do that too. Your app.component.html should contain only the router outlet once you remove the default code:

<router-outlet></router-outlet>
Okay. Now let’s run ng serve and check the output!

Figure 01 – Login Page

Styling with MDBootStrap

Now we need to build the Profile component. But before that, let’s configure MDBootStrap with our project to add styles to the profile component easily.

npm i angular-bootstrap-md --save

npm install --save chart.js@2.5.0 @types/chart.js @fortawesome/fontawesome-free hammerjs

To app.module.ts add,

import { MDBBootstrapModule } from 'angular-bootstrap-md'; 

@NgModule({ imports: [ MDBBootstrapModule.forRoot() ] });

In the angular.json file replace styles and scripts sections with,

"styles": [
"scripts": [

Adding the Profile Component

Now let’s edit the profile component. In profile.component.html, add the following HTML code:

<!-- Navigation Bar -->
<nav class="navbar navbar-expand-lg navbar-dark default-color">
  <a class="navbar-brand" href="#"><strong>Profile</strong></a>
  <button class="navbar-toggler" type="button" data-toggle="collapse" data-target="#navbarSupportedContent"
    aria-controls="navbarSupportedContent" aria-expanded="false" aria-label="Toggle navigation">
    <span class="navbar-toggler-icon"></span>
  </button>
  <div class="collapse navbar-collapse" id="navbarSupportedContent">
    <ul class="navbar-nav ml-auto">
      <li class="nav-item">
        <a class="nav-link" href="#"> Hello {{userName}}!</a>
      </li>
      <li class="nav-item active">
        <a class="nav-link"> Logout <span class="sr-only">(current)</span></a>
      </li>
    </ul>
  </div>
</nav>

<!-- Main Content -->
<main class="text-center my-5">
  <div class="container">
    <h2>My Profile</h2>
    <div class="form-group row">
      <label for="firstName" class="col-sm-2 col-form-label">First Name</label>
      <div class="col-sm-10">
        <div class="md-form mt-0">
          <input type="text" class="form-control" id="firstName" name="firstName" [(ngModel)]="user.firstName">
        </div>
      </div>
    </div>
    <div class="form-group row">
      <label for="lastName" class="col-sm-2 col-form-label">Last Name</label>
      <div class="col-sm-10">
        <div class="md-form mt-0">
          <input type="text" class="form-control" id="lastName" name="lastName" [(ngModel)]="user.lastName">
        </div>
      </div>
    </div>
    <div class="form-group row">
      <label for="aboutMe" class="col-sm-2 col-form-label">About Me</label>
      <div class="col-sm-10">
        <div class="md-form mt-0">
          <textarea id="aboutMe" name="aboutMe" [(ngModel)]="user.aboutMe" class="form-control md-textarea" length="120"></textarea>
        </div>
      </div>
    </div>
    <div class="form-group row">
      <div class="col-sm-3">
        <button type="submit" class="btn btn-primary btn-lg" (click)="updateProfile()">Update</button>
      </div>
    </div>
  </div>
</main>

In the form, we are data-binding to a model called user. Let’s add that model and import it into the profile.component.ts file.

// Generate a typescript class
ng g class User

Add the following code to user.ts:

export class User {
  constructor(
    public id: string,
    public username: string,
    public firstName: string,
    public lastName: string,
    public aboutMe: string,
    public imageUrl: string
  ) {}
}

Since we need to use ngModel in the profile component, we should import FormsModule into app.module.ts:

import { FormsModule } from '@angular/forms';

imports: [
  // ... other modules
  FormsModule
]

Okay, now we need to implement the updateProfile() function to grab the data from the form and store it in the DynamoDB table.

In the profile.component.ts file, add:

import { Component, OnInit } from '@angular/core';
import { APIService } from '../API.service';
import { User } from '../user';
import { Auth } from 'aws-amplify';

@Component({
  selector: 'app-profile',
  templateUrl: './profile.component.html',
  styleUrls: ['./profile.component.scss']
})
export class ProfileComponent implements OnInit {
  userId: string;
  userName: string;
  user = new User('', '', '', '', '', '');

  constructor(private api: APIService) {}

  ngOnInit() {
    Auth.currentAuthenticatedUser({
      bypassCache: false
    }).then(async user => {
      this.userId = user.attributes.sub;
      this.userName = user.username;
    })
    .catch(err => console.log(err));
  }

  async updateProfile() {
    const user = {
      id: this.userId,
      username: this.user.firstName + '_' + this.user.lastName,
      firstName: this.user.firstName,
      lastName: this.user.lastName,
      bio: this.user.aboutMe
    };
    await this.api.CreateUser(user);
  }
}

The updateProfile function gets the firstName, lastName, and bio information from the form inputs and derives the username from the name fields. The id attribute, however, has to be taken from the currently authenticated user.
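For example, the username field in updateProfile is just the two name fields joined with an underscore (hypothetical values):

```javascript
// How updateProfile derives the username field (made-up values)
const firstName = 'Jane';
const lastName = 'Doe';
const username = firstName + '_' + lastName;
console.log(username); // → "Jane_Doe"
```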

Now, let’s run ng serve and go to the /profile path to view our profile page.

Figure 02 – Profile Page

In the second part of this blog, we are going to add the following functionality to our profile app.

  • Loading the saved user data
  • Ability to securely upload the profile image
  • Adding auth guard for the profile component so that unauthorized users will not have access to profile page
  • Automatically redirecting to profile page after a successful login

So guys, I hope this has been useful to you. I’ll see you in the next part.

Stay tuned!
