AWS IAM – Summary

This post provides a summary of the AWS IAM (Identity and Access Management) service. It is recommended reading before the exam (CSA Associate or CSA Pro) to refresh your knowledge. The following topics are covered.

  • IAM Introduction
  • IAM Best Practices
  • Different Types of Policies
  • Policy Evaluation
  • Identity Federation
  • STS API Methods

IAM Introduction

AWS IAM is a global service that you can use to manage access to AWS services and resources. Access can be granted to IAM users, groups and roles using permission policies.

Long Term Credentials

When you create an IAM user, you can assign access keys and a console password, which are considered long-term credentials. Long-term credentials do not expire, so never expose them to anybody else!

Temporary Credentials

An IAM role, on the other hand, provides short-term temporary credentials to whoever assumes the role. These credentials expire within 15 minutes to 12 hours and need to be refreshed.

An IAM role can be assumed by an IAM user or by another role in the same or a different AWS account. It can also be assumed by AWS services such as EC2 and Lambda, or by federated users such as users in an on-premises organization’s Active Directory or web identities (Facebook users, Twitter users, etc.).

IAM Role

An IAM role has two main parts: a permission policy and a trust policy. (These policies are JSON documents.) The permission policy describes what the role is permitted to do. The trust policy describes who can assume the role (e.g. the EC2 service, the Lambda service, a specific AWS account, etc.).
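For example, a trust policy that allows the EC2 service to assume the role looks like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```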

Once the IAM Role is assumed by an allowed entity, AWS STS (Security Token Service) provides the temporary security credentials to the entity. The temporary security credentials contain the following information.

  • Session Token
  • Access Key ID
  • Secret Access Key
  • Expiration

When an IAM user from one AWS account assumes an IAM role in another account (e.g. using the Switch Role feature in the AWS Console, or using the API), the temporary credentials from STS replace his/her existing credentials of the trusted account – the account he/she is from.

IAM Best Practices

The following are best practices for using the IAM service. They should be followed closely, as IAM is the centralized security service in the AWS platform.

  • Lock Away Your AWS Account Root User Access Keys
  • Create Individual IAM Users
  • Use Groups to Assign Permissions to IAM Users
  • Grant Least Privilege
  • Get Started Using Permissions with AWS Managed Policies
  • Use Customer Managed Policies Instead of Inline Policies
  • Use Access Levels to Review IAM Permissions
  • Configure a Strong Password Policy for Your Users
  • Enable MFA for Privileged Users
  • Use Roles for Applications That Run on Amazon EC2 Instances
  • Use Roles to Delegate Permissions
  • Do Not Share Access Keys
  • Rotate Credentials Regularly
  • Remove Unnecessary Credentials
  • Use Policy Conditions for Extra Security
  • Monitor Activity in Your AWS Account

For more information about these best practices, see the link to the AWS documentation in the References section.

Different Types of IAM Policies

There are three major types of IAM policies used to control access in AWS.

  1. Service Control Policies
  2. Identity-Based Policies
  3. Resource-Based Policies

Service Control Policies

Service Control Policies (SCPs) are used to manage all the AWS accounts in your AWS Organization. SCPs can be applied at the individual AWS account level or at the Organizational Unit (OU) level inside your AWS Organization to control the maximum available permissions. (Applying an SCP to an OU applies the same policy to all the AWS accounts under that OU.)

In order to apply SCPs, you must enable “All Features” in the organization. SCPs aren’t available if your organization has enabled only the consolidated billing features.

If multiple SCPs affect an AWS account, the maximum permission is determined by the overlap (intersection) of those SCPs.

You can use either whitelisting or blacklisting of permissions when using SCPs to control maximum permissions. Blacklisting permissions involves less admin overhead when managing many AWS accounts.

SCPs are not a replacement for AWS IAM policies. They only control what a specific AWS account can or cannot do; IAM policies are still required to manage access for the different entities within the AWS account.

If you blacklist a resource/service for an AWS account using SCPs, it cannot be accessed even by the root user of that account.
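As an example, a blacklisting SCP that blocks all S3 actions in every account it is attached to could look like the following (accounts typically also keep the default FullAWSAccess SCP, which allows everything else):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": "s3:*",
      "Resource": "*"
    }
  ]
}
```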

Identity-Based Policies

Identity-based policies are attached to an IAM user, group or role. They specify what the identity (user/group/role) can do. For example, you can allow an IAM user “John” to read from a certain S3 bucket (e.g. mybucket) and deny him from spinning up any EC2 instances.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": ["arn:aws:s3:::mybucket/*"]
    },
    {
      "Effect": "Deny",
      "Action": "ec2:RunInstances",
      "Resource": "*"
    }
  ]
}

We can set up quite granular access control using identity-based policies. For identity-based policies, it’s not mandatory to specify the “Principal” (to whom the policy applies), as it is implied by the attached identity. Identity-based policies can be managed policies (managed by AWS or by the customer, and reusable across many identities) or inline policies (applied only to a single identity).

Resource-based Policies

Resource-based policies are applied to a resource rather than to an identity. You can attach resource-based policies to S3 buckets, SQS queues, etc… With resource-based policies, you can specify who has access to the resource and what actions they can perform on it. Resource-based policies are inline only, not managed.

{
  "Version":"2012-10-17",
  "Statement":[
    {
      "Effect":"Allow",
      "Principal": "*",
      "Action":["s3:GetObject"],
      "Resource":["arn:aws:s3:::mybucket/*"]
    }
  ]
}

The above policy can be applied to the “mybucket” S3 bucket to allow read-only access to the bucket’s content to anybody. Note that we must specify the “Principal” – to whom the policy applies. In the above policy, it is anyone (“*”).

Policy Evaluation

When a principal tries to access an AWS resource, multiple types of policies may apply (SCPs, identity-based policies, resource-based policies). AWS evaluates all of these policies before allowing or denying the principal access to the resource. The main logic behind policy evaluation is as follows.

  • The decision starts at Deny
  • Evaluate all the policies applicable to the resource and the principal
  • An explicit “Deny” overrides any explicit “Allow”
  • If there are no explicit “Deny” policies but there are explicit “Allow” policies, access to the resource is granted
  • If no explicit policies (Allow or Deny) are defined for the resource, the implicit “Deny” applies by default, so access to the resource is denied
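The evaluation logic above can be sketched in code. This is a simplified, illustrative model for a single action (it ignores conditions, resource matching and SCPs), not the actual AWS implementation:

```javascript
// Simplified model of IAM policy evaluation for one action:
// an explicit Deny always wins, then an explicit Allow, and if no
// statement applies the implicit (default) Deny is returned.
// Each statement is modeled as { Effect: "Allow" | "Deny" }.
function evaluate(statements) {
  if (statements.some((s) => s.Effect === "Deny")) return "Deny";   // explicit Deny overrides
  if (statements.some((s) => s.Effect === "Allow")) return "Allow"; // explicit Allow grants
  return "Deny"; // implicit Deny: no applicable policy
}

// An explicit Deny beats an explicit Allow:
console.log(evaluate([{ Effect: "Allow" }, { Effect: "Deny" }])); // "Deny"
// No applicable statements at all: implicit Deny
console.log(evaluate([])); // "Deny"
```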

Evaluating Policies Within a Single Account

The following steps take place when an IAM user/role tries to access a resource within a single AWS account.

  • Check whether Service Control Policies (SCPs) permit the action. If not, deny access immediately.
  • If the SCPs allow it, check the identity-based policy (IAM policy) attached to the identity and the resource-based policy attached to the resource. For example, if IAM user John is accessing an S3 bucket, check the IAM policy attached to John and also the resource-based policy attached to that S3 bucket.
  • If either the identity-based policy OR the resource-based policy allows the action, John is allowed to access the S3 bucket.

Evaluating Cross-Account Policies

When a user from one AWS account wants to access a resource in another AWS account, the policies are evaluated as follows.

  • Check whether Service Control Policies (SCPs) permit the action. If not, deny access immediately.
  • If the SCPs allow it, check the identity-based policy (IAM policy) attached to the identity and the resource-based policy attached to the resource.
  • If BOTH the identity-based policy AND the resource-based policy allow the action, the user is allowed to access the resource.
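The difference between the single-account and cross-account cases can be sketched as follows (again an illustrative model only; it assumes the SCP check has already passed and that there is no explicit deny):

```javascript
// Combine the identity-based and resource-based policy results.
// Same account: either policy allowing the action is enough (OR).
// Cross account: both policies must allow the action (AND).
function isAccessGranted(identityAllows, resourceAllows, sameAccount) {
  return sameAccount
    ? identityAllows || resourceAllows
    : identityAllows && resourceAllows;
}

// Same account: the resource-based policy alone grants access
console.log(isAccessGranted(false, true, true));  // true
// Cross account: the identity-based policy alone is not enough
console.log(isAccessGranted(true, false, false)); // false
```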

Identity Federation

You can create up to 5,000 IAM users per AWS account. Imagine your organization has 10,000 users who need access to an AWS account. First of all, it is not possible to create 10,000 IAM users. Even if it were possible, administering 10,000 IAM users would be a nightmare. This is where identity federation comes into play.

Identity federation means outsourcing identity management to an external party. It could be Google, Facebook, Twitter or an on-premises Active Directory. Google, Facebook, Twitter, etc. are examples of web identities. Active Directory, on the other hand, is an example of a corporate identity provider.

Identity federation is based on trust between AWS and the external identity provider (web or corporate). The process of AWS identity federation is as follows.

  • Create an AWS IAM role that can be assumed by a federated identity
  • Attach the required permissions to that IAM role
  • Configure AWS and the external IdP (Identity Provider)
  • If it is a web identity like Facebook or Twitter, create an app on Facebook or Twitter and configure the app IDs and secrets in AWS
  • If it is a corporate identity like Active Directory, upload the metadata document or link the metadata URL in AWS. Do the necessary configuration on the on-premises side as well.
  • When a user wants to access a resource in AWS, direct him/her to the login page of the web identity (“Login with Facebook”, etc.) or the login page of Active Directory
  • Once the user has successfully logged in, a confirmation token is sent to AWS. For a web identity it could be an id_token; for Active Directory it is a SAML assertion.
  • AWS trusts the assertion and allows the user to assume the IAM role, thereby accessing the authorized resources in AWS

STS API Methods

AWS IAM roles provide temporary credentials to whoever is authorized to assume them. The temporary credentials are supplied by AWS Security Token Service (STS) after evaluating the permission policies attached to the role.

There are five main API methods provided by AWS STS.

  • AssumeRole
  • AssumeRoleWithWebIdentity
  • AssumeRoleWithSAML
  • GetFederationToken
  • GetSessionToken

AssumeRole

This is typically used for cross-account access. The AWS account that wants to share a resource with another AWS account creates an IAM role and adds a permission policy and a trust policy to it. We can reference the other AWS account’s ID in the trust policy, so that account can use the “Switch Role” feature in the AWS Console (entering the role name and the account ID of the resource-sharing account) to get access to the resource.
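For instance, a trust policy that allows identities in another AWS account (here the example account ID 111122223333) to assume the role could look like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```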

AssumeRoleWithSAML

This API call returns a set of temporary security credentials for users who have been authenticated via a SAML authentication response. Typically used for Active Directory federation.

AssumeRoleWithWebIdentity

This API call returns a set of temporary security credentials for users who have been authenticated in a mobile or web application with a web identity provider. If you have a mobile app whose users need to access an AWS resource (e.g. upload a profile photo to an S3 bucket), you can set up the IAM role, configure the web identity and allow all authenticated (with Facebook, Google, etc.) federated users to assume that role.

GetFederationToken

The GetFederationToken API call requires the long-term credentials of an IAM user instead of an IAM role. Because of that, use this API method only in a safe environment where the long-term credentials can be stored.

It returns a set of temporary security credentials (consisting of an access key ID, a secret access key, and a security token) for a federated user. Typical use is in a proxy application that gets temporary security credentials on behalf of distributed applications inside a corporate network.

GetSessionToken

This is typically used to obtain temporary credentials in untrusted environments. It returns a set of temporary credentials for an AWS account or IAM user. The caller must already be an IAM user; when accessing AWS resources from untrusted environments, he/she can use MFA to protect the calls to AWS made with the GetSessionToken credentials.

References

https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html

https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_about-scps.html

https://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html

https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_evaluation-logic.html

https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_compare-resource-policies.html

https://aws.amazon.com/identity/federation/

https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html

Static Web Hosting with Amazon S3 and Amazon CloudFront

The following blog post accompanies the video on static web hosting with Amazon S3 and CloudFront; you can find it here. This blog post covers the theoretical aspects of the S3, CloudFront and Cloud9 services.

If you want to deploy your React/Angular/Vue.js applications, Amazon S3 static web hosting is the best choice on the AWS platform. Your site receives 99.999999999% durability and 99.99% availability just by deploying to S3. That means the website assets, i.e. the images/videos/HTML/JS files, will almost never be lost, and the site will be available to users 99.99% of the time. All of that comes at a low cost, with no server management (serverless) and high scalability.

Static Web Applications

What is a static web application? Static web applications have web pages with static content: HTML, JavaScript, CSS, etc. Such a web application typically interacts with a backend to send and receive data. The backend could be a REST or GraphQL backend.

A static web application does not have dynamic content. It does not rely on server-side processing, including server-side scripts such as PHP, ASP.NET and JSP. If you upload such files to an S3 bucket, the application will not function as it should, since S3 doesn’t support server-side scripting.

The Architecture

Figure 01: The architecture

In the above simple architecture, we upload the website’s built code, which is just HTML, CSS and JavaScript, to the S3 bucket. Then we serve the website via a CloudFront distribution. The Amazon CloudFront service is used as a Content Delivery Network (CDN) which operates at the edge.

Amazon S3

S3 is short for Simple Storage Service. It is one of the most cost-effective services for hosting static content like images/videos/files on the AWS cloud. Amazon S3 is “object storage”: you store objects (e.g. images, videos, files) as a whole, with metadata associated with each object. Each object is therefore self-contained, which enables S3 to use a distributed storage architecture.

Since Amazon S3 is object storage, it cannot be used for block storage. For example, you cannot host an operating system (e.g. Linux/Windows) on an S3 bucket. For that, you should use an EBS (Elastic Block Store) volume attached to an EC2 instance. In block storage, a file is divided into equally sized units and stored. When the complete file is retrieved, an index is used to find the related units and put them together into the full file. Each unit does not contain metadata, so it is not self-contained or comprehensible when viewed individually.

S3 – Life Cycle Management

Amazon S3 supports life cycle management of objects. That means you can set rules for objects (files/images/videos) to change their storage class over time. You can move a file from a frequently accessed storage class to an infrequently accessed one, or even archive it with Amazon Glacier. AWS also offers the S3 Intelligent-Tiering storage class, where S3 monitors the access patterns of the objects in a bucket and automatically moves them to the most cost-effective access tier.
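As an illustration, a lifecycle configuration in the JSON shape accepted by the S3 API/CLI might look like this (the rule ID, prefix and day counts are arbitrary examples):

```json
{
  "Rules": [
    {
      "ID": "archive-old-logs",
      "Filter": { "Prefix": "logs/" },
      "Status": "Enabled",
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" },
        { "Days": 90, "StorageClass": "GLACIER" }
      ]
    }
  ]
}
```

This rule moves objects under the logs/ prefix to Standard-Infrequent Access after 30 days and archives them to Glacier after 90 days.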

S3 – Encryption

S3 also supports encryption of objects, with server-side and client-side encryption options. If you enable server-side encryption, S3 encrypts objects before saving them and decrypts them when they are read/downloaded. S3 client-side encryption allows you to manage the encryption process yourself on the client side.

S3 – Versioning

S3 versioning allows you to keep versions of S3 objects. When you enable versioning for a bucket, every update to an object creates a new version, and you can easily roll back to an earlier version if required. If you delete an object while versioning is enabled, S3 will not actually delete the object; it adds a delete marker on top of the object instead. It is always possible to delete the delete marker and restore the object easily.

S3 – Access Control

When you create an S3 bucket, it defaults to private: only the creator/owner can read the content of the bucket. You can further use S3 access policies to control access to an S3 bucket or an S3 object. There are two main types of access control policies.

  1. Resource-Based Policies
    • Bucket policy
    • ACL (Access Control Lists)
  2. User-Based Policies (IAM policies)

Both of these policy types are JSON-based policies. Resource-based policies are applied to an S3 bucket or an S3 object, whereas user-based policies are applied to the IAM users who access S3 to work with objects.

An S3 ACL is a sub-resource that’s attached to every S3 bucket and object. It defines which AWS accounts or groups are granted access and the type of access. When you create a bucket or an object, Amazon S3 creates a default ACL that grants the resource owner full control over the resource.

As a general rule, AWS recommends using S3 bucket policies or IAM policies for access control. S3 ACLs are a legacy access control mechanism that predates IAM.

How to Build a Product that Dominates the Market!

I’ve been developing software products for over 5 years now. That’s not a very long time, but within those years my team was able to build a product that captured the marketplace and generates positive cash flow for the company. In this blog post, let me share some of my lessons and pointers for building a market-winning product, which can be applied to any type of project (be it small, medium or large).

A Market Winning Product

What really is a Market Winning Product? Well, any product that wins the market has the following common characteristics.

  • It is easy to sell
  • It keeps the existing customers happy
  • It acquires new customers all the time
  • It attracts the competitors’ customers
  • It continuously evolves to stay the best

If your product has the above characteristics, congratulations! You’ve got a market-winning product. If not, keep reading.

Get Everybody on the Same Page

You may be building your own product, or you may be working for somebody else. Either way, you must be sold on your product first. Not only you, but also your teams, i.e. developers, QA engineers, marketers, the CTO, the CEO, and everybody else, must be 100% sold on the product you build and believe it is the best product, delivering great value to you and to the customers.

When everybody on your team honestly believes that your product delivers great value, they will do whatever it takes to make it a winning product. Let me give you an example. If you are a salesperson, wouldn’t it be very easy for you to sell your product if you know how it delivers value to you yourself? You wouldn’t have to use cheap sales gimmicks in the sales process; instead, you would share your own experience with the customer to close the deal.

Let’s take another example. If you are a developer of the product, wouldn’t it be easy for you to understand the pain points of your product as a consumer? You would also know what new features you’d really love to see in the product.

So, if your team believes in the product, everybody will be on the same page and will always want to make it a winning product. Changing your team’s mindset is the foundation of your product’s success. With the right mindset, everything else mentioned in this post becomes easy to achieve.

It’s All about Synergy

Most companies have multiple teams dedicated to a product. One team develops the product, another assures its quality, another handles operations, and another does sales and marketing. If the teams are synergized and trust and depend upon each other, there will be far fewer conflicts among them. Everybody sees each other as equals, and everything they do contributes to building a great product that serves people.

Let me give an example. Imagine that you are building a “GDPR Compliance Platform”. After the first release, the marketers start the marketing process. This tool is new to the marketplace, so there will be a lot of feedback from potential customers. When these concerns are brought to the developers, they should understand the importance of the requested changes in order to convert those leads into actual paying customers. At the same time, with regular and transparent communication between the marketing team and the development team, everyone will have a clear idea about which new features should be developed and which improvements are required. Hence, while the marketers promise new features to potential buyers, the development team can focus on implementing them.

In order to have synergy among teams, it is absolutely vital to have all the teams on the same page. How do you do that? Remember the first point: they must be completely sold on the product they are building and do whatever it takes to make it a winning product.

Defeating the Obscurity

Obscurity is the number one challenge for any new product. No matter how great your product and its content are, if people don’t know about it, they will not buy it, and your company will eventually fail.

How do you address obscurity? Both the marketing team and the development team must assume responsibility for it. The marketing team must exhaust all mediums to get your product in front of potential buyers. Be it offline or online marketing, they must identify effective marketing strategies and invest the marketing budget in them. For example: social media campaigns, search engine optimization, training videos, events, conferences, and even cold-calling customers.

How can the development team help defeat marketplace inertia? It’s very important to make the user onboarding process a breeze. Potential buyers must be able to try out the product with a minimal entry barrier. For example, users should be able to self-sign-up for the product (maybe via a trial subscription) and interact with it. They need enough time to build trust and confidence in the product before they make a purchasing decision. Another important area for developers to focus on is the user experience (UX) of your product. As we all know, first impressions matter. If the users trying out your product don’t find it intuitive and easy to use, they will probably pass on the deal.

What about your product’s ability to integrate with other major products? When you are starting off, it is absolutely important to build trust with customers. If you can build partnerships with major players that are already trusted by millions of people, your product and brand will also prove their trustworthiness. So it is important for your development team to build a product that plugs easily into other products using standard integration technologies.

Customer Satisfaction vs Customer Acquisition

Let’s imagine you are in the early stages of your product and have several paying customers. At this stage, which is more important: existing customer satisfaction or new customer acquisition? In my opinion, customer satisfaction should not even be a concern for your product, because you and the team have developed a culture of always over-delivering to your customers: the team is highly engaged with the product, understands what customers really need, and does whatever it takes to deliver it. Of course, you will receive customer complaints from time to time. While handling them effectively, you should always focus on new customer acquisition, simply because you have a strong belief that the customers who aren’t using your product are already unsatisfied and don’t even know it.

Never be Satisfied, Always Improve

When your product has captured the attention of the marketplace and acquired many customers, you should never be satisfied with its current state. It’s time to use all the resources within the team, or find/outsource new resources, to invest in innovation. You should set the bar so high that your competition thinks you are out of their league, and thus dominate the marketplace.

When it comes to innovation, you should expand your thinking to all areas of the product. It could be technological innovations such as machine learning, artificial intelligence or blockchain; UI/UX innovations that improve the usability of the product; or anything else really. You and the team must pay attention to new trends in the market that could become mainstream in the future and adjust your product to be an early adopter in those areas.

Summary

In this blog post, I discussed what it takes to build a product that dominates the marketplace. Everything starts with a change of mindset. If you and your team don’t trust and believe in your product in the first place, why would anyone else want to buy it? With the changed mindsets of the right set of people on the team, you will find that all the above points are easy to achieve and will drive your product to dominate the marketplace.

Improving the UX of your website with an intelligent chatbot | AWS Lex

Note: The video series of this blog post is available here.

User experience is one of the most important concerns in building modern web applications. No matter how feature-rich your website is, if people don’t find it intuitive to use, you will not reach your potential customers.

AWS Lex allows users to easily interact with your website using natural language via a conversational chatbot. Your users can chat by voice or text and use your product’s services without having to go through complex user interfaces.

As a developer, you don’t have to be a machine learning/deep learning expert to embed a chatbot into your website. AWS Lex provides advanced deep learning functionality and automatic speech recognition out of the box.

In this post, we are going to create a conversational bot that finds weather information at the user’s request. The following are the technologies we will use in this project.

  1. Amazon Lex
  2. Amazon Cognito
  3. AWS Lambda
  4. AWS IAM
  5. Amplify Library
  6. Angular Framework

Please find the github repo related to this post at https://github.com/mjzone/weather-bot

Getting the bot ready for training

First, let’s create our bot and get it ready for training. We want our bot to search for weather information when a user requests it. In this guide, we’ll only consider communication via text.

Log in to the AWS console and go to AWS Lex.

If you haven’t created a bot before, select “Get Started” and you will be directed to the bot creation window.

Creating a Custom Bot

We can either select an already-created sample bot or create our own custom bot. Let’s select “Custom Bot”.

Let’s give our bot a name (I named it “WeatherBot”) and fill in the other configuration as in the image above. Since we aren’t enabling voice, select the text-based application option. Afterward, click “Create” and you will be presented with a new screen as below.

We need to understand the terminology of bots in AWS Lex. There are five main concepts to remember.

  1. Intents
  2. Utterances
  3. Slots
  4. Prompts
  5. Fulfillment

Intents are the intentions for which someone would use the bot. In our example, someone might want to know the weather in a particular city of the world, so we will have an intent called “FindWeather”. We can have more than one intent per bot; another could be “GreetUser”. Utterances are the different phrases with which a user interacts with the bot. An example utterance is “How is the weather in Colombo?”

When a user utters a phrase, in text or in voice, the bot matches it to a corresponding intent. The utterance “How is the weather in Colombo” will be matched to the “FindWeather” intent. In order to fulfill the user’s intent, the bot asks the user any other required questions. These questions are called “prompts”. The user’s replies to prompts are stored in variables for later use; these variables are called “slots”. When all the required slots for an intent have been collected, the bot fulfills the user’s intention. The “fulfillment” may involve calling a third-party service to search for information, talking to a database, executing some logic in a Lambda function, etc. Our example WeatherBot will call the OpenWeatherMap API with the user’s requested city, search for the weather information and send it back to the user.

Creating Intents

Click the “Create Intent” button to create the first intent for our WeatherBot. Let’s call it “FindWeather” and add it.

Now let’s add some sample utterances a user might say. They will help our chatbot learn about user inputs.

I have added three utterances as shown above. Note that two of them have the {City} variable. This is a required slot value for the FindWeather intent, and we can fill it from the user’s utterance itself. The “Tell me about the weather” utterance doesn’t involve the {City} slot, so the bot will ask the user about the city using a prompt.

In the Slots section, let’s add our City slot and define its prompt message. The bot can use this prompt to get the City slot filled by the user if it wasn’t already filled from the user’s initial utterance.

Our bot requires only one variable to search for the weather: the City. Once it is received from the user, the bot calls the action for fulfillment of the intent.

In our case, let’s call a Lambda function that talks to OpenWeather API to search the weather for the requested City.

Creating the Lambda Function

Let’s use the Serverless Framework to create our Lambda function. If you haven’t already installed the Serverless Framework, please visit this link.

Once you have installed the Serverless Framework and configured credentials for your AWS account, create a serverless service using the command below.

serverless create --template aws-nodejs --path weather-bot

Once creation is complete, change directory into the weather-bot folder and open the files in your favorite IDE.

serverless.yml

Let’s add a simple function as shown above; I will call it getWeather. The logic of the function lives in the handler.js file.
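The serverless.yml itself appears as an image in the original post, so here is a minimal sketch of what it could contain (the runtime version is an assumption for the time of writing):

```yaml
service: weather-bot

provider:
  name: aws
  runtime: nodejs12.x

functions:
  getWeather:
    handler: handler.getWeather
```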

Lambda logic to obtain weather info from OpenWeatherMap API

I obtained a free API key from https://openweathermap.org/api and added it as a query parameter (APPID) in the URL. I extracted the “City” slot value that was taken from the user and passed it in the URL as well. The “units” parameter is set to “metric” in order to get temperature values in Celsius.

I also used the “axios” npm library to easily send HTTP requests to the server. You may use the default “http” Node module if you prefer. Note how the answer variable is constructed: it is built with string concatenation to form a natural-language-like response. This response is returned from the Lambda function as a special JSON object, which is required for our bot to read the answer properly. For more information about the request/response JSON templates supported by AWS Lex, see this link to the official documentation.

Alright! Let’s deploy our Lambda function using the following commands. Before deploying, don’t forget to install the axios library from npm.

# First run
npm install axios
# Then run
serverless deploy

Once it is successfully deployed, go back to Lex console and select the lambda function name under the Fulfillment section.

Testing our bot

We have done all the configuration required from our side. Now let’s build the bot and let AWS Lex train its neural network. Once the build is complete we can test our bot.

Once it shows the success message, you can start testing it in the console itself.

The bot asks a follow-up question since we didn’t provide the city name.

As you can see, our bot successfully connected to the OpenWeatherMap API and sent us the weather information for London.

Adding our bot to the website

Now that we have an awesome bot, we need to add it to our production website so our users can directly interact with it.

Let’s create an Angular website and use the AWS Amplify library to connect to our WeatherBot securely. We need to install the Angular CLI and Amplify CLI globally and configure Amplify with AWS credentials.

# Install angular cli globally
npm install -g @angular/cli
# Create a new angular project
ng new my-bot-website --style=scss --routing
# Install amplify library globally
npm install -g @aws-amplify/cli
# Configure amplify with AWS IAM credentials
amplify configure

In order to communicate with our bot securely, we have to make sure only logged-in users can talk to it. As the next steps, let’s add a login to the website and then use Amplify’s out-of-the-box interaction component to connect to our WeatherBot.

Once a user successfully logs into the application, AWS Cognito assigns them an IAM role. The permission policies for invoking backend services are associated with this role. Since our user will communicate with the AWS Lex chatbot, we need to grant that authenticated role permission to call AWS Lex. You can find the IAM role name assigned to the logged-in user in the Cognito Identity Pool configuration.

I’m not going to add those steps in this blog post as it is already lengthy. Instead, let me share the GitHub URL for the code here.

https://github.com/mjzone/weather-bot

You can clone the code from the repo and run amplify init to initialize it with your AWS resources. To run it in the browser, use amplify serve.

You can use Amplify library itself to host your website along with the bot in an S3 bucket.

I hope someone will find this post useful.

Cheers!

Sentiment Analysis with AWS Comprehend | AI/ML Series

In the last post we discussed how to add speaking ability to our applications using AWS Polly. Let’s extend the same example to analyze the sentiment of the text that the user types.

As usual, I recommend watching the following video before reading this blog post and using this post as a reference when building out the application on your own.

AWS Comprehend Service

AWS Comprehend uses NLP to extract insights about content without any preprocessing requirements. It is capable of recognizing entities, languages, sentiments, key phrases and other common elements of a given text or document. One common use case of AWS Comprehend is analyzing the social media feed about your product and taking necessary actions based on users’ sentiments.

Calling Comprehend API Methods

Let’s use AWS Lambda, our serverless function, to talk to the AWS Comprehend service and do a sentiment analysis. We are going to use the API methods detectSentiment and detectDominantLanguage from the AWS Comprehend JavaScript SDK. Refer to the full SDK documentation here.

Firstly, we create an endpoint that triggers the Lambda function. Go to your serverless.yml and add this piece of code.

functions:
  analyze:
    handler: handler.analyze
    events:
      - http:
          path: analyze
          method: post
          cors: true

It will create a new endpoint in API Gateway with the path /analyze that triggers the analyze Lambda function. Here is the analyze function code, which goes in handler.js.

const AWS = require("aws-sdk");
const comprehend = new AWS.Comprehend();

module.exports.analyze = (event, context, callback) => {
  let body = JSON.parse(event.body);

  const params = {
    Text: body.text
  };

  // Detecting the dominant language of the text
  comprehend.detectDominantLanguage(params, function (err, result) {
    if (!err) {
      const language = result.Languages[0].LanguageCode;

      const sentimentParams = {
        Text: body.text,
        LanguageCode: language
      };

      // Analyze the sentiment
      comprehend.detectSentiment(sentimentParams, function (err, data) {
        if (err) {
          callback(null, {
            statusCode: 400,
            headers: {
              "Access-Control-Allow-Origin": "*"
            },
            body: JSON.stringify(err)
          });
        } else {
          callback(null, {
            statusCode: 200,
            headers: {
              "Access-Control-Allow-Origin": "*"
            },
            body: JSON.stringify(data)
          });
        }
      });
    }
  });
};

At the top of handler.js, you need a reference to the Comprehend API from the AWS SDK. Then we first identify the dominant language of the text by calling the detectDominantLanguage API method, and pass that language code to the next API call, detectSentiment, inside the callback of the first method.

As a result, you will get the matching sentiment and the percentages for the Negative, Positive, Neutral and Mixed sentiments. Now, send that back to the frontend.
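To make the result concrete, here is a sketch of what the frontend receives. The field names follow the Comprehend detectSentiment response; the score values below are made-up example numbers, and topSentiment is a hypothetical helper, not part of the SDK.

```javascript
// Illustrative result shape (field names per the detectSentiment response;
// the score values below are made-up example numbers)
const exampleResult = {
  Sentiment: "POSITIVE",
  SentimentScore: { Positive: 0.95, Negative: 0.01, Neutral: 0.03, Mixed: 0.01 }
};

// Pick the highest-scoring label, e.g. to show a confidence breakdown
function topSentiment(result) {
  return Object.entries(result.SentimentScore)
    .sort((a, b) => b[1] - a[1])[0][0]
    .toUpperCase();
}

console.log(topSentiment(exampleResult)); // → POSITIVE
```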

IAM Permission for AWS Comprehend

We are now almost finished with the backend, except that we have to add a policy granting AWS Comprehend permissions to the IAM role attached to the Lambda function. If you haven’t read part 01 of this series, read/watch it; there I showed you how to set up an IAM role for the Lambda.

Our IAM role was youtube-polly-actual-role. It had an ARN, and we referenced it in the serverless.yml file as follows.

arn:aws:iam::<account-id>:role/youtube-polly-actual-role

Go to the IAM console of your AWS account and attach a new policy to that same role as shown below.

Setting up the Frontend

We have been using an Angular app as the frontend in the earlier project. Let’s continue by adding a button below the user text area and calling our API endpoint.

Go to app.component.html and add this simple HTML code to display an additional button next to the “Speak” button. We will display the returned sentiment value in a suitable color below the buttons as well.

<div style="margin: auto; padding: 10px; text-align: center;">
  <h2>Write Something...</h2>
  <div>
    <textarea #userInput style="font-size: 15px; padding: 10px;" cols="60" rows="10"></textarea>
  </div>
  <div>
    <select [(ngModel)]="selectedVoice">
      <option *ngFor="let voice of voices" [ngValue]="voice">{{voice}}</option>
    </select>
  </div>
  <div style="margin-top: 10px">
    <button style="font-size: 15px;" (click)="speakNow(userInput.value)">Speak Now</button>
    <button style="font-size: 15px;" (click)="analyze(userInput.value)">Analyze</button>
  </div>

  <!-- Following section will show the returned sentiment value with a suitable color -->

  <div>
    <h2 *ngIf="sentiment=='POSITIVE'" style="color: green;">{{sentiment}}! </h2>
    <h2 *ngIf="sentiment=='NEUTRAL'" style="color: orange;">{{sentiment}} </h2>
    <h2 *ngIf="sentiment=='NEGATIVE'" style="color: red;">{{sentiment}}! </h2>
  </div>
</div>

Let’s add the analyze function in the app.component.ts file and make use of a service to call the API Gateway endpoint.

import { Component } from '@angular/core';
import { APIService } from './api.service';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.scss']
})
export class AppComponent {
  sentiment = null;

  constructor(private api: APIService) {}

  analyze(input) {
    let data = {
      text: input
    };
    this.api.analyze(data).subscribe((result: any) => {
      this.sentiment = result.Sentiment;
    });
  }
}

Next, let’s make the frontend API service call the /analyze endpoint and return the data. Go to api.service.ts and add this code.

import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';

@Injectable({
  providedIn: 'root'
})
export class APIService {

  ENDPOINT = 'https://461xegl8zf.execute-api.us-east-1.amazonaws.com/dev';

  constructor(private http: HttpClient) {}

  speak(data) {
    return this.http.post(this.ENDPOINT + '/speak', data);
  }

  analyze(data) {
    return this.http.post(this.ENDPOINT + '/analyze', data);
  }
}

Our frontend is now complete. It will send the user input to the backend endpoint, and the Lambda function will figure out the language of the text and send back the sentiment analysis.

Result

Cheers!

Building a Talking App | AI/ML Series

Welcome to another practical AWS tutorial. This is the written version of the following YouTube video. I would recommend watching the video before reading the blog. Use this blog as the source to copy the code and practice by building the app yourself.

AWS services used in the App

  • Amazon Polly
  • Amazon S3
  • AWS IAM
  • AWS Lambda

Creating a Serverless Project/Service

Install the Serverless Framework with npm and create a new Node.js project/service called backend.

npm install serverless -g
serverless create --template aws-nodejs --path backend

Now replace the serverless.yml file with the following code, which creates a Lambda function called “speak”.

service: talking-backend

provider:
  name: aws
  runtime: nodejs8.10
  region: us-east-1
  role: arn:aws:iam::<account-id>:role/talking-app-role

functions:
  speak:
    handler: handler.speak
    events:
      - http:
          path: speak
          method: post
          cors: true

The “speak” lambda function will send the text payload to AWS Polly and return the voice file from S3 bucket.

Creating an S3 Bucket

We need an S3 bucket to store all the voice clips that are returned by AWS Polly. Use the AWS console to create the bucket with a unique name. In my case the S3 bucket name is “my-talking-app”.

Create an IAM Role

Our Lambda function interacts with the AWS Polly and AWS S3 services. (We shall see the code later in the blog.) In order to communicate with these services, the Lambda function must be assigned an IAM role that has permission to talk to S3 and Polly. So create an IAM role with a preferred name, e.g. “talking-app-role”, with the following IAM policy.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "polly:*",
                "s3:PutAccountPublicAccessBlock",
                "s3:GetAccountPublicAccessBlock",
                "s3:ListAllMyBuckets",
                "s3:HeadBucket"
            ],
            "Resource": "*"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::my-talking-app",
                "arn:aws:s3:::my-talking-app/*"
            ]
        }
    ]
}

Copy the ARN of the IAM role and add it under the provider section of the serverless.yml file.

provider:
  name: aws
  runtime: nodejs8.10
  region: us-east-1
  role: arn:aws:iam::885121665536:role/talking-app-role

“Speak” Lambda Function

The speak Lambda function does three main tasks.

  1. Call the AWS Polly synthesizeSpeech API and get the audio stream (mp3 format) for the text the user entered
  2. Save the above audio stream in the S3 bucket
  3. Get a signed URL for the saved mp3 file in S3 and send it back to the frontend application

First of all, let’s install the required npm modules inside the backend folder.

npm install aws-sdk 
npm install uuid

The AWS Polly synthesizeSpeech API requires the text input and a voice ID to convert the text into speech. Here, we use the voice of “Joanna” to speak the text that is passed from the frontend.

let AWS = require("aws-sdk");
let polly = new AWS.Polly();
let s3 = new AWS.S3();
const uuidv1 = require('uuid/v1');

module.exports.speak = (event, context, callback) => {
  let data = JSON.parse(event.body);
  const pollyParams = {
    OutputFormat: "mp3",
    Text: data.text,
    VoiceId: data.voice
  };

  // 1. Getting the audio stream for the text that user entered
  polly.synthesizeSpeech(pollyParams)
    .on("success", function (response) {
      let data = response.data;
      let audioStream = data.AudioStream;
      let key = uuidv1();
      let s3BucketName = 'my-talking-app';

      // 2. Saving the audio stream to S3
      let params = {
        Bucket: s3BucketName,
        Key: key + '.mp3',
        Body: audioStream
      };
      s3.putObject(params)
        .on("success", function (response) {
          console.log("S3 Put Success!");
        })
        .on("complete", function () {
          console.log("S3 Put Complete!");
          let s3params = {
            Bucket: s3BucketName,
            Key: key + '.mp3',
          };

          // 3. Getting a signed URL for the saved mp3 file
          let url = s3.getSignedUrl("getObject", s3params);

          // Sending the result back to the user
          let result = {
            bucket: s3BucketName,
            key: key + '.mp3',
            url: url
          };
          callback(null, {
            statusCode: 200,
            headers: {
              "Access-Control-Allow-Origin": "*"
            },
            body: JSON.stringify(result)
          });
        })
        .on("error", function (response) {
          console.log(response);
        })
        .send();
    })
    .on("error", function (err) {
      callback(null, {
        statusCode: 500,
        headers: {
          "Access-Control-Allow-Origin": "*"
        },
        body: JSON.stringify(err)
      });
    })
    .send();
};

Now, deploy the backend API and the Lambda function

sls deploy

Frontend Angular App

In order to test our backend, we need a frontend that makes a speak request with user-entered text. So let’s create an Angular application.

ng new client
? Would you like to add Angular routing? No
? Which stylesheet format would you like to use? SCSS

Let’s create an angular service that talks to our Amazon Polly backend.

ng g s API

Add the following code to the api.service.ts file. It creates a speak function that calls the Lambda function with the selected voice and the text entered by the user.

import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';

@Injectable({
  providedIn: 'root'
})
export class APIService {

  ENDPOINT = '<YOUR_ENDPOINT_HERE>';

  constructor(private http: HttpClient) {}

  speak(data) {
    return this.http.post(this.ENDPOINT, data);
  }
}

Let’s use the main app component to render our UI for the “Talking App”. Go to app.component.html and replace the file with the following HTML code. It adds a basic text area, a selection of the preferred voice and a speak action button.

<div style="margin: auto; padding: 10px; text-align: center;">
  <h2>My Talking App</h2>
  <div>
    <textarea #userInput style="font-size: 15px; padding: 10px;" cols="60" rows="10"></textarea>
  </div>
  <div>
    <select [(ngModel)]="selectedVoice">
      <option *ngFor="let voice of voices" [ngValue]="voice">{{voice}}</option>
    </select>
  </div>
  <div style="margin-top: 10px">
    <button style="font-size: 15px;" (click)="speakNow(userInput.value)">Speak Now</button>
  </div>
</div>

Go to the app.component.ts file and add the corresponding handler function for the view. Replace it with the following code.

import { Component } from '@angular/core';
import { APIService } from './api.service';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.scss']
})
export class AppComponent {
  voices = ["Matthew", "Joanna", "Ivy", "Justin"];
  selectedVoice = "Matthew"; // must match an entry in voices

  constructor(private api: APIService) {}

  playAudio(url) {
    let audio = new Audio();
    audio.src = url;
    audio.load();
    audio.play();
  }

  speakNow(input) {
    let data = {
      text: input,
      voice: this.selectedVoice
    };
    this.api.speak(data).subscribe((result: any) => {
      this.playAudio(result.url);
    });
  }
}

Since we are using ngModel in app.component.html, we need to import the FormsModule in the app.module.ts file. Go to the app.module.ts file and replace its content with:

import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { HttpClientModule } from '@angular/common/http';
import { AppComponent } from './app.component';
import { FormsModule } from '@angular/forms';

@NgModule({
  declarations: [
    AppComponent
  ],
  imports: [
    FormsModule,
    BrowserModule,
    HttpClientModule
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }

Running the Application

Now that our backend and the frontend are ready, let’s play with our app.

Go to the client directory and run the angular app locally,

ng serve

Type some text on the text area and select a voice from the dropdown. When you click Speak Now it should speak the text aloud!

Cheers!

Building a Profile App – Part 02

In Part 01 we started building the Profile app with Amplify as the frontend library. We managed to save the user information on a DynamoDB table via a GraphQL API.

In this second part, let’s add the following features and improvements to our app.

  1. Securely uploading the profile image
  2. Loading the saved user data
  3. Implement an auth guard for the profile page to avoid unauthorized access
  4. Automatically redirect to the profile page after a successful login

Configuring Storage Category with Amplify

For the sake of this application, let’s allow users to view only their own profile picture. (Maybe it makes little sense, but I want to show how to use private images with the Amplify storage service.)

Okay, let’s use two higher-order components from the Amplify Angular library to make this task very easy.

  • <amplify-photo-picker></amplify-photo-picker>
  • <amplify-s3-image></amplify-s3-image>

amplify-photo-picker allows users to upload an image to S3. We can pass different storage options to our liking. It supports three storage levels, i.e. public, protected and private. We are going to use the private level, which only allows the owner to view and upload the image.

But hey, before that let’s add the Storage category with Amplify, which will create an S3 bucket for us. So open a command prompt and run the following commands.

amplify add storage

? Please select from one of the below mentioned services:
Content (Images, audio, video, etc.)
? Please provide a friendly name for your resource that will be used to label this category in the project: s38e43106
? Please provide bucket name: profileapp03f4977230524d1e977654540b6c1924
? Who should have access: Auth users only
? What kind of access do you want for Authenticated users: read/write

amplify push

Securely uploading the profile image

Now let’s bring those two components into profile.component.html.

<h2>My Profile</h2>
<div class="form-group row">
  <div class="col-sm-12">
    <div class="md-form mt-0">
      <mdb-icon *ngIf="showPhoto" fas icon="upload" (click)="editPhoto()" size="2x" class="upload-icon"></mdb-icon>

      <!-- Display Image -->
      <amplify-s3-image [path]="user.imageUrl" [options]="{'level': 'private'}" *ngIf="showPhoto">
      </amplify-s3-image>

      <!-- Photo Picker -->
      <amplify-photo-picker *ngIf="!showPhoto" path="image" [storageOptions]="{'level': 'private'}" (uploaded)="onImageUploaded($event)">
      </amplify-photo-picker>
    </div>
  </div>
</div>
<form> ...

Edit the profile.component.ts as follows.

export class ProfileComponent implements OnInit {
  ...
  showPhoto: boolean;
  userCreated: boolean;

  async onImageUploaded(e) {
    this.user.imageUrl = e.key;
    if (this.userCreated) {
      await this.api.UpdateUser({
        id: this.userId,
        image: this.user.imageUrl
      });
    }
    this.showPhoto = true;
  }

  editPhoto() {
    this.showPhoto = false;
  }

  getType(): string {
    return this.userCreated ? 'UpdateUser' : 'CreateUser';
  }

  async updateProfile() {
    const user = {
      id: this.userId,
      username: this.user.firstName + '_' + this.user.lastName,
      firstName: this.user.firstName,
      lastName: this.user.lastName,
      bio: this.user.aboutMe,
      image: this.user.imageUrl
    };
    await this.api[this.getType()](user);
  }
  ...
}

Loading Saved Data

At this point our application manages to store profile information in the DynamoDB table and the profile image in an S3 bucket. However, when we reload the web page, all the information disappears. Let’s fix that by fetching the saved data when the profile component loads.

We are going to update the ngOnInit lifecycle method to load the user data and populate the User model, which automatically binds to our Angular form.

...
ngOnInit() {
  this.showPhoto = false;
  Auth.currentAuthenticatedUser({
    bypassCache: false
  }).then(async user => {
    this.userName = user.username;
    this.userId = user.attributes.sub;
    let result = await this.api.GetUser(this.userId);
    if (!result) {
      this.userCreated = false;
      this.user = new User('', '', '', '', '', '');
    } else {
      this.userCreated = true;
      this.showPhoto = !!result.image;
      this.user = new User(
        this.userId,
        result.username,
        result.firstName,
        result.lastName,
        result.bio,
        result.image
      );
    }
  })
  .catch(err => console.log(err));
}
...
...

Logout Functionality

Now that we have almost finished the profile app functionality, let’s add a method for authenticated users to log out.

In the profile.component.ts file, add the following method, which calls the signOut method of the Auth API.

import { Router } from '@angular/router';
...
constructor(private api: APIService, private router: Router) {}
...
logOut() {
  Auth.signOut({ global: true })
    .then(data => {
      // Navigate back to the login route defined in app-routing.module.ts
      this.router.navigate(['/login']);
    })
    .catch(err => console.log(err));
}

Make sure to bind this function to the click event of the Logout link in the template.
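For instance, the Logout link from the navbar markup in Part 01 could be wired up like this (a sketch; the surrounding classes come from that template):

```html
<!-- Bind logOut() to the Logout link's click event -->
<li class="nav-item active">
  <a class="nav-link" (click)="logOut()"> Logout </a>
</li>
```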

Configuring Auth Guards

Currently, we have two basic routes: one for the login screen and the other for our profile component. We must not allow the profile component to load unless the user is logged in. We can achieve that using an auth guard.

Create an auth guard with,

ng g guard auth

Here is the code for auth guard service.

import { Injectable } from '@angular/core';
import { CanActivate, Router } from '@angular/router';
import { Auth } from 'aws-amplify';

@Injectable({
  providedIn: 'root'
})
export class AuthGuard implements CanActivate {
  constructor(private router: Router) {}

  canActivate(): Promise<boolean> {
    return new Promise((resolve) => {
      Auth.currentAuthenticatedUser({
        bypassCache: false
      })
      .then((user) => {
        if (user) {
          resolve(true);
        }
      })
      .catch(() => {
        this.router.navigate(['/login']);
        resolve(false);
      });
    });
  }
}

The Auth.currentAuthenticatedUser API call returns the currently authenticated user. If there is no currently authenticated user, the auth guard resolves to false and the profile component will not be activated.

So now, let’s add that auth guard to protect our profile component in app-routing.module.ts.

import { AuthGuard } from './auth.guard';
...
const routes: Routes = [
  {
    path: "profile",
    component: ProfileComponent,
    canActivate: [AuthGuard]
  },
  {
    path: "login",
    component: AuthComponent
  },
  {
    path: '**',
    redirectTo: 'login',
    pathMatch: 'full'
  }
];

Automatic Redirection After Login

Finally, let’s add an automatic redirect to the profile page once a user is successfully authenticated. We can accomplish this by listening to the authStateChange$ events generated by the Amplify library.

Go to the auth.component.ts file and add the following code.

import { AmplifyService } from 'aws-amplify-angular';
import { Router } from '@angular/router';

constructor(public amplifyService: AmplifyService, public router: Router) {
  this.amplifyService = amplifyService;
  this.amplifyService.authStateChange$
    .subscribe(authState => {
      if (authState.state === 'signedIn') {
        this.router.navigate(['/profile']);
      }
    });
}

Okay. Now we can run the application and check that everything works. Log in with a registered user and make sure you are redirected to the profile page. Then update the profile information with a profile image and make sure the information is persisted.

ng serve

You will still see the Amplify sign-in page for a second before the redirection. In order to hide that default component, pass the “hide” input to <amplify-authenticator>.

<amplify-authenticator [hide]="['Greetings']"></amplify-authenticator>

Final Page

I hope this post has been useful. You can find the GitHub repo of this example project at https://github.com/mjzone/amplify-user-profile

Cheers!

Building a Profile App – Part 01

This blog post accompanies the following YouTube video. I would recommend watching the video first and using this blog post to copy the code snippets and build the application yourself.

Watch the video here

Creating the Angular App

Let’s start a new angular application using ng new command.

ng new profileApp

Select YES for Angular routing and select SCSS when prompted by the CLI. After the project is created, change directory into the profileApp folder:

cd profileApp

Now let’s create two components for Login page and Profile landing page.

ng g c auth
ng g c profile

Installing Amplify Libraries

It’s time to add amplify and aws-appsync libraries. Firstly, install the amplify cli globally and configure it with your AWS account.

npm install -g @aws-amplify/cli
amplify configure

Afterwards, we need to install the amplify, amplify-angular, app-sync and graphql-tag libraries, as we will use them in our profile app.

npm install --save aws-amplify
npm install --save aws-amplify-angular

Additional configuration for the Angular App

We need to add some polyfills and additional configuration to get Amplify and AppSync to work with our Angular application. Otherwise you’ll waste much time troubleshooting errors.

In the polyfills.ts file (src/polyfills.ts) add the following two lines at the top of the file.

(window as any).global = window; 
(window as any).process = { browser: true };

Also go to index.html (src/index.html) and add the following script within the head tags.

<script>
  if (global === undefined) {
    var global = window;
  }
</script>

Now go to tsconfig.app.json (src/tsconfig.app.json) and add “node” to the compilerOptions types.

{
  "extends": "../tsconfig.json",
  "compilerOptions": {
    "outDir": "../out-tsc/app",
    "types": ["node"]
  },
  "exclude": [
    "test.ts",
    "**/*.spec.ts"
  ]
}

Initializing an Amplify Project on Cloud

At this point, we can initialize an amplify project using the amplify cli.

amplify init
## Provide following answers when prompted

? Enter a name for the project profileApp
? Enter a name for the environment dev
? Choose your default editor: Visual Studio Code
? Choose the type of app that you're building javascript
? Please tell us about your project: What javascript framework are you using? angular
? Source Directory Path: src
? Distribution Directory Path: dist/profileApp
? Build Command: npm run-script build
? Start Command: ng serve
## Choose your aws profile when prompted as well

After the process is completed, let’s add two amplify categories for auth and api.

amplify add auth
## Provide following answer for the prompt

Do you want to use the default authentication and security configuration? Yes, use the default configuration.
amplify add api
## Provide following answers for the prompts

? Please select from one of the below mentioned services GraphQL
? Provide API name: profileapp
? Choose an authorization type for the API Amazon Cognito User Pool
Use a Cognito user pool configured as a part of this project
? Do you have an annotated GraphQL schema? No
? Do you want a guided schema creation? Yes
? What best describes your project: Single object with fields (e.g., “Todo” with ID, name, description)
? Do you want to edit the schema now? Yes

Now, Amplify will open the schema.graphql file with a sample model. While the command prompt is waiting, replace the content with the following GraphQL model, save the file and press Enter to continue in the command prompt.

type User @model {
  id: ID!
  username: String!
  firstName: String
  lastName: String
  bio: String
  image: String
}

At this point, we have created the templates for all the AWS resources locally. We need to push the templates to actually create the services. To do that, type:

amplify push
## Provide following answers when prompted

? Are you sure you want to continue? Yes
? Do you want to generate code for your newly created GraphQL API Yes
? Choose the code generation language target angular
? Enter the file name pattern of graphql queries, mutations and subscriptions src/graphql/*/.graphql
? Do you want to generate/update all possible GraphQL operations - queries, mutations and subscriptions Yes
? Enter maximum statement depth [increase from default if your schema is deeply nested] 2
? Enter the file name for the generated code src/app/API.service.ts

It will take a few minutes to provision the resources on AWS. Be patient 🙂

Configuring Amplify Libraries with the App

Now that we have configured the resources on AWS, Amplify creates a new file, aws-exports.js, in the frontend directory structure with all the configuration details of those services.

Let’s use that file to set up the initial connection from the Angular frontend to the AWS backend.

Go to the main.ts (src/main.ts) file and configure Amplify.

import Amplify from 'aws-amplify';
import amplify from './aws-exports';

Amplify.configure(amplify);

Now let’s import the amplify-angular library to use the already configured higher-order components for our login.

Go to app.module.ts and import AmplifyAngularModule and AmplifyService.

import { AmplifyAngularModule, AmplifyService } from 'aws-amplify-angular';

@NgModule({
  declarations: [
    AppComponent
    ...
  ],
  imports: [
    ...
    AmplifyAngularModule
  ],
  providers: [AmplifyService]
})

Now we can use <amplify-authenticator></amplify-authenticator> component directly in the auth component html and implement a complete login functionality. (Magical!)

But before that let’s set up our routes in the app-routing.module.ts file. We have two basic routes: one for the login screen and the other for our profile component.

In the app-routing.module.ts file add the routes.

const routes: Routes = [
  {
    path: "profile",
    component: ProfileComponent
  },
  {
    path: "login",
    component: AuthComponent
  },
  {
    path: '**',
    redirectTo: 'login',
    pathMatch: 'full'
  }
];

Adding the Login Component

It’s time to add the login screen. Go to auth.component.html and add this code. It will turn into a login screen.

<amplify-authenticator></amplify-authenticator>

Before running the application to check the login screen, you need to add the styles of amplify-authenticator to the styles.scss file.

Add this line of CSS in the styles.scss file (src/styles.scss):

@import '~aws-amplify-angular/theme.css';

We need to remove the default content that Angular has added in the app.component.html file. So let’s do that too. Your app.component.html should look like this when you remove the default code:

<router-outlet></router-outlet>

Okay. Now let’s run ng serve and check the output!

Figure 01 – Login Page

Styling with MDBootStrap

Now we need to build the profile component. But before that, let’s configure MDBootstrap in our project so we can style the profile component easily.

npm i angular-bootstrap-md --save

npm install --save chart.js@2.5.0 @types/chart.js @fortawesome/fontawesome-free hammerjs

To app.module.ts add,

import { MDBBootstrapModule } from 'angular-bootstrap-md'; 

@NgModule({
  imports: [MDBBootstrapModule.forRoot()]
})

In the angular.json file replace styles and scripts sections with,

"styles": [
"node_modules/@fortawesome/fontawesome-free/scss/fontawesome.scss",
"node_modules/@fortawesome/fontawesome-free/scss/solid.scss",
"node_modules/@fortawesome/fontawesome-free/scss/regular.scss",
"node_modules/@fortawesome/fontawesome-free/scss/brands.scss",
"node_modules/angular-bootstrap-md/scss/bootstrap/bootstrap.scss",
"node_modules/angular-bootstrap-md/scss/mdb-free.scss",
"src/styles.scss"
],
"scripts": [
"node_modules/chart.js/dist/Chart.js",
"node_modules/hammerjs/hammer.min.js"
]

Adding the Profile Component

Now let’s edit the profile component. In profile.component.html, add the following HTML code:

<!-- Navigation Bar -->
<header>
  <nav class="navbar navbar-expand-lg navbar-dark default-color">
    <a class="navbar-brand" href="#"><strong>Profile</strong></a>
    <button class="navbar-toggler" type="button" data-toggle="collapse" data-target="#navbarSupportedContent"
      aria-controls="navbarSupportedContent" aria-expanded="false" aria-label="Toggle navigation">
      <span class="navbar-toggler-icon"></span>
    </button>
    <div class="collapse navbar-collapse" id="navbarSupportedContent">
      <ul class="navbar-nav ml-auto">
        <li class="nav-item">
          <a class="nav-link" href="#"> Hello {{userName}}!</a>
        </li>
        <li class="nav-item active">
          <a class="nav-link"> Logout <span class="sr-only">(current)</span></a>
        </li>
      </ul>
    </div>
  </nav>
</header>

<!-- Main Content -->
<main class="text-center my-5">
  <div class="container">
    <h2>My Profile</h2>
    <form>
      <div class="form-group row">
        <label for="firstName" class="col-sm-2 col-form-label">First Name</label>
        <div class="col-sm-10">
          <div class="md-form mt-0">
            <input type="text" class="form-control" id="firstName" name="firstName" [(ngModel)]="user.firstName">
          </div>
        </div>
      </div>
      <div class="form-group row">
        <label for="lastName" class="col-sm-2 col-form-label">Last Name</label>
        <div class="col-sm-10">
          <div class="md-form mt-0">
            <input type="text" class="form-control" id="lastName" name="lastName" [(ngModel)]="user.lastName">
          </div>
        </div>
      </div>
      <div class="form-group row">
        <label for="aboutMe" class="col-sm-2 col-form-label">About Me</label>
        <div class="col-sm-10">
          <div class="md-form mt-0">
            <textarea id="aboutMe" name="aboutMe" [(ngModel)]="user.aboutMe" class="form-control md-textarea" length="120"
              rows="3"></textarea>
          </div>
        </div>
      </div>
      <div class="form-group row">
        <div class="col-sm-3">
          <button type="submit" class="btn btn-primary btn-lg" (click)="updateProfile()">Update</button>
        </div>
      </div>
    </form>
  </div>
</main>

In the form, we use two-way data binding to a model called user. Let’s create that model and import it into the profile.component.ts file.

// Generate a typescript class
ng g class User

Add the following code to user.ts:

export class User {
  constructor(
    public id: string,
    public username: string,
    public firstName: string,
    public lastName: string,
    public aboutMe: string,
    public imageUrl: string
  ) {}
}

Since we use ngModel in the profile component, we also need to import FormsModule into app.module.ts:

import { FormsModule } from '@angular/forms';

@NgModule({
  imports: [
    FormsModule,
    ...
  ]
})

Okay, now we need to implement the updateProfile() function to grab the data from the form and store it in the DynamoDB table.

In the profile.component.ts file, add:

import { Component, OnInit } from '@angular/core';
import { APIService } from '../API.service';
import { User } from '../user';
import { Auth } from 'aws-amplify';

@Component({
  selector: 'app-profile',
  templateUrl: './profile.component.html',
  styleUrls: ['./profile.component.scss']
})
export class ProfileComponent implements OnInit {
  userId: string;
  userName: string;
  user = new User('', '', '', '', '', '');

  constructor(private api: APIService) {}

  ngOnInit() {
    Auth.currentAuthenticatedUser({
      bypassCache: false
    }).then(async user => {
      this.userId = user.attributes.sub;
      this.userName = user.username;
    })
    .catch(err => console.log(err));
  }

  async updateProfile() {
    const user = {
      id: this.userId,
      username: this.user.firstName + '_' + this.user.lastName,
      firstName: this.user.firstName,
      lastName: this.user.lastName,
      bio: this.user.aboutMe
    };
    await this.api.CreateUser(user);
  }
}

The updateProfile() function takes the firstName, lastName, and aboutMe values from the form inputs, and derives the username from the first and last names.

The “id” attribute, however, has to be taken from the currently authenticated user — that’s the Cognito sub we stored in ngOnInit.
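To make that id-vs-form distinction explicit, the payload construction can be factored into a small pure helper. This is just a sketch — buildUserInput and ProfileForm are illustrative names, not part of the Amplify-generated API:

```typescript
// Illustrative helper: builds the CreateUser input from the form values
// plus the authenticated user's Cognito "sub", which serves as the id.
interface ProfileForm {
  firstName: string;
  lastName: string;
  aboutMe: string;
}

function buildUserInput(userId: string, form: ProfileForm) {
  return {
    id: userId,                                     // from Auth.currentAuthenticatedUser, not the form
    username: `${form.firstName}_${form.lastName}`, // derived from the form fields
    firstName: form.firstName,
    lastName: form.lastName,
    bio: form.aboutMe
  };
}
```

With a helper like this, updateProfile() would reduce to a single call such as `await this.api.CreateUser(buildUserInput(this.userId, this.user));`.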

Now, let’s run ng serve and go to the “/profile” path to view our profile page.

Figure 02 – Profile Page

In the second part of this blog, we are going to add the following functionality to our profile app:

  • Loading the saved user data
  • Ability to securely upload a profile image
  • Adding an auth guard to the profile component so that unauthorized users cannot access the profile page
  • Automatically redirecting to the profile page after a successful login
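As a preview of the auth guard item, here is a minimal, framework-free sketch of the idea. The names are illustrative; a real Angular guard would implement CanActivate and check the session with Amplify’s Auth module:

```typescript
// Sketch: blocks navigation for unauthenticated users and redirects them.
// In Angular, this logic would live in a class implementing CanActivate.
class AuthGuard {
  constructor(
    private isAuthenticated: () => Promise<boolean>, // e.g. wraps Auth.currentAuthenticatedUser
    private redirectToLogin: () => void              // e.g. router.navigate(['/'])
  ) {}

  async canActivate(): Promise<boolean> {
    const authed = await this.isAuthenticated();
    if (!authed) {
      this.redirectToLogin(); // unauthorized users never reach /profile
    }
    return authed;
  }
}
```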

So guys, I hope this has been useful to you. I’ll see you in the next part.

Stay tuned!