Budget Report 14/04/2017

I received another billing alarm last night, this time with respect to my EC2 budget.
EC2 Alarm Forecasted50

Upon further inspection, the source of the expenditure was AWS’ EBS service. This seemed strange to me, as I hadn’t used EBS during this DinoStore project.

In order to gain a greater understanding of my expenditure, I compiled my AWS bills into a spreadsheet.

Budget Spreadsheet

The EBS charge for April was $0.64.

I originally thought that it was due to the MySQL RDS instances that I created for Lab 2, and considered that it might relate to the snapshots I had created. However, the EBS volumes were all created in March, while my RDS snapshots were created in April. On closer inspection of the EBS volumes, I was able to determine that they related to previous RDS instances I had created for my QwikLabs project.

By looking into AWS’ documentation on EBS volumes, I was able to determine their use and cost:

Device use:

Device Cost
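As a rough sanity check on a charge like the $0.64 above, EBS General Purpose (SSD) storage is billed per GB-month. The sketch below assumes a rate of about $0.12 per GB-month (roughly the ap-southeast-2 gp2 price at the time; verify against current pricing), and the volume sizes and durations are hypothetical, chosen only to show the arithmetic.

```python
# Rough EBS storage cost estimate. The rate is an ASSUMED ap-southeast-2
# gp2 price of ~$0.12 per GB-month, not an official quote.
def ebs_monthly_cost(size_gb, days_provisioned, days_in_month=30,
                     rate_per_gb_month=0.12):
    """Cost of keeping gp2 storage provisioned for part of a month."""
    return size_gb * rate_per_gb_month * (days_provisioned / days_in_month)

# Hypothetical: 16 GB of leftover volumes from old RDS instances, kept 10 days.
cost = ebs_monthly_cost(size_gb=16, days_provisioned=10)
print(f"${cost:.2f}")
```

With these invented inputs, the estimate happens to land at $0.64, which illustrates how small forgotten volumes quietly accumulate charges.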

My next step was to detach the EBS volumes; to actually stop further charges, the volumes also need to be deleted, as detached volumes still accrue storage costs.
Detaching EBS volumes

As for my DinoStore budgeting: I have been using Free Tier services, so I should not have incurred any other charges.


Introduction to Amazon Relational Database Service (RDS)-Linux

Introduction and Aim
The purpose of this lab is to create and use an Amazon Relational Database Service through AWS. Amazon RDS is a cloud-based service for managing databases: they can be created, operated, and scaled within RDS, which supports MySQL, PostgreSQL, Oracle, and SQL Server engines.



  • Create an Amazon Relational Database Service (RDS) instance
  • Connect to the RDS instance with client software


Creating a Relational Database Service (RDS) instance
RDS is a service of its own within the AWS Management Console, rather than one created through EC2. The lab script requires the MySQL database to have several specific settings, which are as follows:

  1. Specify DB Details
    InkedMySQL database option_LI
  2. DB Instance Class: db.t1.micro (The free tier one)
    DB Instance Class db.t1.micro
    AWS notes that this DB instance class is a ‘previous generation instance, that may have lower performance and higher cost than the newer generation.’ Because of this, I looked into db.t2.micro, the current generation instance, which has higher memory and network performance while still being on the free tier, so I will be using it in this lab.
    DB Instance Class db.t2.micro
  3. Multi-AZ Deployment: No
  4. Storage Type: General Purpose (SSD)
  5. Allocated Storage: 5
  6. DB Instance Identifier: RDSLab
  7. Master Username: AWSMaster
  8. Master Password: AWS12345
  9. Confirm Password: AWS12345
    Specify DB Details Full
  10. Next Step -> Configure Advanced Settings
  11. Publicly Accessible: No
  12. VPC Security Group(s): Choose a security group containing the text qls. I’m using a security group that I’ve created as I don’t have access to the QwikLabs one.
  13. Database Name: RDSLab
    Configure Advanced Settings
  14. Backup Retention period: 0 days (to disable automatic backups)
    CAS Part 2
  15. Launch DB Instance

Now that the database instance has been launched, it is important to double-check the security groups of the selected VPC and make sure that the inbound rules contain: Type MySQL/Aurora (3306) with Source-
Editing Inbound Rules of SG


Create an Amazon Linux instance from an Amazon Machine Image (AMI)
Under the EC2 Launch Instance, the Amazon Linux AMI is selected. The instance type is kept as default, which is t2.micro. The next steps, ‘Configure Instance Details’, and ‘Add Storage’, are kept with their default settings. In the ‘Tag Instance’ step, the value given for the name attribute is RDS Free Lab. The final step is to review and launch.
RDS Free Lab Instance

Connecting to Amazon EC2 instance via SSH
Once the instance is launched, the PuTTY Secure Shell client is used to connect to the server. This involves entering the instance’s public DNS value into the PuTTY Host Name box, prefixed by ec2-user@. In the Category list, under the SSH option, clicking Auth provides a ‘Private key file for authentication’ box. This is where I use the private key that I previously created.

Connecting to the RDS instance
Within the terminal that opens up, the command ‘sudo yum install mysql’ is typed in, and the install agreement is accepted.
Install mysql

Once the install completes, the following command is entered to connect to MySQL, using the endpoint name of the RDS instance:
‘mysql --host cjcfraykqpwn.rds.ap-southeast-2.amazonaws.com --password --user AWSMaster’
This prompts for the AWS12345 password that was created earlier.
InkedEnter mySQL_LI
The darker text at the top is where I accidentally typed the command incorrectly.

MySQL is now logged into, and the mysql> prompt is visible. The ‘SHOW DATABASES;’ command can be entered to check whether any records are returned.
mySQL Show Databases
The returned output shows that the RDS instance has been connected to successfully.


I found this to be an interesting lab, using bash to install MySQL and connect to RDS. Prior to attempting the Linux RDS lab, I had attempted the Windows RDS lab. I’m curious to find out whether the Windows VM’s command-line tool would be as successful in connecting to RDS.

Introduction to AWS Lambda

Introduction and Aim
The purpose of this lab is to gain a basic understanding of AWS Lambda by creating and ‘deploying a lambda function in an event driven environment’, as stated in the QwikLabs lab script.
The lab script states that ‘Lambda is a compute service that runs code in response to events and automatically manages the compute resources, making it easy to build applications that respond quickly to new information.’ Lambda is serverless.



  • Create an AWS Lambda S3 event function
  • Configure an Amazon S3 bucket
  • Upload a file to an Amazon S3 bucket
  • Monitor AWS Lambda S3 functions through Amazon CloudWatch


Configure an Amazon S3 bucket as the Lambda event source
The first step in configuring an Amazon S3 bucket is to determine the region the lab is running in; in my case, it’s Sydney. Under the S3 service, the bucket I’ve created is called ql-lambda, and is set in my current region.

Create an S3 function
On the AWS console, Lambda is located under Services. In the Lambda console, the ‘Get Started Now’ button is clicked, followed by the ‘New Function’ button.
Lambda
The QwikLabs instructions for creating the function are as follows:

Select Blueprint: s3-get-object
Configure Triggers: Set the bucket name to the bucket that has just been created
Set Event Type to ‘Object Created (All)’
Enable Checkbox: Enable Trigger
InkedConfigure Triggers_LI

–> Next
Configure Function:
Name: S3Function
Description: S3 Function for Lambda
Runtime: Node.js
-There were two available .js nodes, so I chose Node.js 4.3
Configure part 1
Handler: Leave as index.handler
Role: Choose an existing role
Existing Role: lambda-role
-As I’m not doing this through QwikLabs, there wasn’t an existing role called lambda-role. Instead, I created a new role called lambda-role, containing two policy templates: Simple Microservice permissions, and S3 object read-only permissions. I chose those two as they seemed to best fit the role required for this lab.
Configure part 2 (handler and role)
–> Advanced Settings
Memory (MB): 128
Timeout (s): 5

The final section involves the Review section, and then the function can be created.
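The lab’s blueprint runs on Node.js, but the shape of the handler is easy to see in a language-neutral sketch. The minimal Python analogue below only extracts the bucket name and object key from the incoming S3 event (following the standard S3 notification structure); the real s3-get-object blueprint additionally calls S3 to fetch the object’s content type, which is omitted here.

```python
# Minimal sketch of an S3-triggered Lambda handler (a Python analogue of
# the s3-get-object blueprint, NOT the blueprint itself). It extracts the
# bucket and key from the event record; the real blueprint also calls S3.
import urllib.parse

def handler(event, context=None):
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    # Keys arrive URL-encoded (spaces become '+'), so decode them.
    key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
    print(f"Object {key} created in bucket {bucket}")
    return {"bucket": bucket, "key": key}

# A synthetic event mimicking an 'Object Created' notification:
sample_event = {
    "Records": [{"s3": {"bucket": {"name": "ql-lambda"},
                        "object": {"key": "test-file.txt"}}}]
}
print(handler(sample_event))
```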


Upload a file to an Amazon S3 bucket to trigger a Lambda event.
The next step is to upload a file to the S3 bucket in order to trigger a call to the Lambda function.
The file uploaded to the bucket, for the purpose of this test, contains only lowercase lettering with no spaces.
Bucket with Upload

In the Lambda functions page, the function itself can be clicked and then the ‘Monitoring’ tab can be opened. This will provide four graphs: Invocation count, Invocation duration, Invocation errors, and Throttled invocations.
Monitoring in Lambda
Below is a screenshot of the QwikLabs script, which explains what each graph measures.
Graph Explanation

All of this information can be viewed in CloudWatch. This can be accessed by clicking on the ‘View logs in CloudWatch’ button, which is located above the graphs. In the logs section of CloudWatch, the first log stream contains information on ‘Start Request’, ‘End Request’, and ‘Report Request’ of the associated lambda event.
cw log info
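Each invocation in those logs ends with a REPORT line summarising duration and memory. Assuming the standard format of that line (the sample below is illustrative, not copied from my logs), the metrics can be pulled out with a small parser:

```python
# Parse the metrics out of a Lambda 'REPORT' log line. The line format is
# assumed from standard CloudWatch output; the sample line is made up.
import re

METRIC_RE = re.compile(
    r"(Billed Duration|Duration|Memory Size|Max Memory Used): ([\d.]+) (ms|MB)")

def parse_report(line):
    # Units (ms/MB) are fixed per metric, so only the values are kept.
    return {name: float(value) for name, value, _unit in METRIC_RE.findall(line)}

sample = ("REPORT RequestId: 1a2b3c Duration: 3.52 ms "
          "Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 30 MB")
print(parse_report(sample))
```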


The information recorded when a Lambda event is triggered appears very informative, for example as an overview of financial transactions. This sort of service, implemented in a business, may help keep track of expenditure by staff.

Introduction to AWS CloudFormation

Introduction and Aim
The purpose of this lab is to use an Amazon EC2 instance and install WordPress with a local MySQL database. QwikLabs states that AWS CloudFormation ‘gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion.’



  • Create a stack using an AWS CloudFormation template
  • Monitor the progress of the stack creation
  • Use the stack resources
  • Clean up when the stack is no longer required


Create a stack
In this section, I create a stack from an AWS CloudFormation template.
InkedCreate Stack_LI

CloudFormation is one of the services found in the AWS management console. In the service, I can ‘Create Stack’, selecting the ‘WordPress blog’ template.
The details are as follows:
Name: MyWPTestStack
DBPassword: Pa55word
DBRootPassword: Pa55word1
DBUser: AWSQLStudent
Specifying Details

The lab script mentions here that ‘the same WordPress template contains an input parameter, KeyName, which specifies the EC2 key pair for the Amazon EC2 instance that is declared in the template. An Amazon key pair has been created for you.’

As I’m only following the lab script, not actually completing the lab through QwikLabs, I don’t have the pre-made QwikLabs environment. However, I can create EC2 instances alongside the ones I’ve already made, and I already have a key pair.

In the KeyName drop down on the Details page, I select the key pair that I’ve already created.

The automatically filled parameters are kept on their default settings, and no ‘Tags’ or ‘Advanced Options’ settings are changed, so all that is left to do is create the instance.

Stack Review

Monitoring stack creation
The AWS service for CloudFormation monitors the progress of the stack’s creation. Whilst being created, the status will be CREATE_IN_PROGRESS. Once finished, the status notification will show CREATE_COMPLETE.
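The console polls this status for you, but the same wait loop can be sketched in code. The function below stands in for what a script would do with CloudFormation’s DescribeStacks API; the status feed here is faked so the loop logic itself is visible (a real loop would also sleep between polls).

```python
# Sketch of the wait-for-stack logic CloudFormation tooling performs.
# fetch_status stands in for a real DescribeStacks call (assumption: in
# practice this would come from an AWS API client, with a sleep between polls).
def wait_for_stack(fetch_status,
                   failure_states=("CREATE_FAILED", "ROLLBACK_COMPLETE")):
    history = []
    while True:
        status = fetch_status()
        history.append(status)
        if status == "CREATE_COMPLETE":
            return history
        if status in failure_states:
            raise RuntimeError(f"Stack creation failed: {status}")

# Fake status feed standing in for the API: two polls in progress, then done.
statuses = iter(["CREATE_IN_PROGRESS", "CREATE_IN_PROGRESS", "CREATE_COMPLETE"])
print(wait_for_stack(lambda: next(statuses)))
```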




Using the stack
The WordPress installation still needs to be completed. This is done by clicking on the Outputs tab and using the hyperlink located on the page.
Outputs WP
Once the installation is complete, the WordPress dashboard appears. From here, customization and blog posts can happen.



Deleting the stack
Deleting the stack involves selecting the stack to be deleted, pressing ‘Delete Stack’ under ‘Actions’, and then confirming the deletion.
Delete Stack

During the process, the stack status changes to DELETE_IN_PROGRESS. When a stack is deleted, all of the resources associated with the stack will also be deleted.


By following this lab, I’ve managed to learn how to use CloudFormation to create a stack and install a WordPress template. The implementation of stacks appears to be very useful for running applications, though I would be interested in comparing it to the AWS Lambda service.

Introduction to Elastic Load Balancing

Introduction and Aim
The purpose of this lab is to gain an understanding of the Amazon Elastic Load Balancer. QwikLabs describes the Amazon Elastic Load Balancer (ELB) as a ‘service that automatically distributes incoming application traffic across multiple EC2 instances.’ This can increase the fault tolerance of applications, as the ELB service responds to incoming traffic with the required load balancing capacity. The ELB service can be provided within a single Availability Zone or across many zones, and can also be used in a VPC.



  • Logging into the Amazon Management Console
  • Creating an Elastic Load Balancer
  • Adding Instances to an Elastic Load Balancer


Logging into the Amazon Management console
When using AWS, I log into the console through my administrator account rather than my root account. This is a security measure, as my root account has access to the financial aspects of AWS. If I were intending to use AWS for a business or for sensitive information, I would have more users, each with access corresponding to the level of security required.
In order to reduce latency, my AWS account is set in the Sydney region. Although not every service is available at the Sydney zone, I’m currently only working with the basics of what AWS can provide, so I haven’t yet come across any availability issues.


Creating an Elastic Load Balancer
ELBs are located within the EC2 service. For this lab, I choose a Classic Load Balancer, which I’ve called ‘free-lab-load-balancer’.
Classic LB
The security group assigned to the ELB is a new one called ELB-SG1. The lab script has a preset one, but as the script is being used only as a guideline, I needed to either use an existing security group or make a new one.
InkedAssign SG (NEW) New SG_LI
The Type is an AWS preset configuration, so I’m keeping it as is.

The next step in the load balancer launch is ‘Configure Security Settings’, in which nothing is changed, so I move on to the ‘Configure Health Check’ screen. When I did this, a warning screen appeared:
Config Sec Settings Warning
This warning is something to be heeded for future professional use, but not for this lab.
The lab script asks for the following values:
Response Timeout: 2 seconds
Health Check Interval: 6 seconds
Unhealthy Threshold: 2
Healthy Threshold: 2
Config Health Check
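These values translate directly into how quickly the load balancer reacts: roughly, an instance’s state changes after threshold consecutive checks, spaced interval seconds apart. The calculation below is a simplification that ignores the response timeout and any check already in flight:

```python
# Approximate time for a classic ELB to change an instance's health state:
# consecutive checks needed x seconds between checks. A rough lower bound
# that ignores the response timeout and in-flight checks.
def time_to_state_change(interval_s, threshold):
    return interval_s * threshold

unhealthy = time_to_state_change(interval_s=6, threshold=2)
healthy = time_to_state_change(interval_s=6, threshold=2)
print(f"Marked unhealthy after ~{unhealthy}s, healthy again after ~{healthy}s")
```

With the lab’s settings, a failing instance is taken out of rotation after roughly 12 seconds, and brought back about 12 seconds after it recovers.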

The next step is to add EC2 instances; I chose two arbitrary instances from my instance list.
Adding EC2 Instances

As tags are not a part of this exercise, I move on to the final step of reviewing all the load balancer specifications.
ELB Review
After checking that everything was according to the script, the load balancer can be created.


Once the load balancer is created, I can click on the ‘Instances’ tab alongside the ‘Description’ tab near the bottom of the screen. The ELB displays alt-text over the ‘i’ icon next to each instance, reporting on the status of the instance in relation to the load balancer.
Instances Within the ELB
In the ‘Description’ tab, the DNS name field contains a hyperlink that, when copied into the browser window, directs to the load balancer’s page. QwikLabs states that ‘While it all looks the same on the front end, as you refresh the page, on the back end your requests are being load balanced between your two running instances.’

The DNS link didn’t work for me, and instead just showed a blank screen. Upon further inspection with the Firefox developer tools, the network was reporting an Error 503 (Service Unavailable); for a classic ELB this typically means no registered instances are passing their health checks, so there is no healthy back end to serve the request.
Back End Server Unavailable

I considered that perhaps I had made a mistake during the load balancer launch process, so I created another load balancer, taking a look at a classmate’s blog for assistance and rigorously looking over the lab script again.

The DNS link result this time was: Server not found. Using the developer tools, I could see that it wasn’t the same problem as my previous load balancer, which implied that it was no longer a back-end server issue; ‘Server not found’ suggests the DNS name itself was failing to resolve.
Network Display for LB 2
DNS Resolution
Unfortunately, I still didn’t know what the underlying problem was, or why it was no longer a back-end issue.


This was an interesting lab in the application of a multi-instance service such as the Amazon Elastic Load Balancer. I would like to know why the DNS link failed, and I’m not confident that I could determine that on my own. Having a trained person explain the reasoning behind each step of launching an ELB may help me understand how to correctly implement the service.

Introduction to Amazon DynamoDB

Introduction and Aim
The purpose of this lab is to create a simple table in Amazon DynamoDB, which is used to store information about a music library. QwikLabs describes Amazon DynamoDB as ‘a fast and flexible NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale.’



  • Create an Amazon DynamoDB table
  • Load data into an Amazon DynamoDB table
  • Query an Amazon DynamoDB table
  • Delete an Amazon DynamoDB table


Creating a new table
In DynamoDB, I create a table called ‘Music’. For the partition key, I enter ‘Artist’ with type String. The next step is to click ‘Add Sort Key’ and create a new field called ‘SongTitle’, also of type String.
The table settings are left as default, and the table can now be created.
InkedCreate Table_LI

Creating a DynamoDB table is a very simple process on the user’s end, as AWS controls most of the set-up.


Adding and modifying table data
In DynamoDB, each item is made up of attributes, similar to the way an entity contains attributes. Only the primary key attributes are required.
In order to create an item, the specific table is selected, in my case it’s the music table. Then the items tab can be clicked and a ‘Create Item’ option will be displayed.
The lab script asks for the following information to be input into the new item:

Artist: No One You Know (String)
SongTitle: Call Me Today (String)

Then, to create another attribute, the ‘Append’ button is used. In this instance, another String type with Field: AlbumTitle, and Value: Somewhat Famous.
Additional Attribute to Create Item

Another attribute is made, this time of type Number, with Field: Year, and Value: 2015.

Then the item can be saved, now with four attributes.
Personalised Item to Create

The lab script asks for two more items to be created. The final table looks like this:
All Items


Modifying an existing item in the table
The table can be modified by selecting the Music table and either double-clicking on the cell to edit, or, as the lab script suggests, clicking the Items tab, selecting the item, pressing Edit in the Actions drop-down, and saving any changes made. The lab script directs for the year ‘2014’ to be changed to ‘2013’.
Editing Date on Music Item


Querying the table.
The table can be queried to find specific items based on various information. The lab script mentions that ‘the primary key is made of Artist (partition key) and SongTitle (sort key).’
In the music table, under the items tab, I can change the drop-down labelled ‘Scan’ to ‘Query’.
The first query requires me to input ‘No One You Know’ into the partition key (Artist, String) value box. Once searched, all tracks by the artist ‘No One You Know’ are displayed.
Query Search -No One You Know
The next query keeps the previous partition key and adds a sort key condition: SongTitle (String) = ‘Call Me Today’.
Query Search-Call Me Today

For the final query, still keeping the partition key specification, the sort key data is cleared and the ‘Add filter’ button pressed. In the new filter row, the attribute is set to Year, of type Number, with the value 2013. This limits the results to songs by the specified artist with the specified year.
Query Search -Filter Year 2013
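The three queries above follow one pattern: the partition key selects an item collection, the sort key narrows within it, and filters are applied after the read. As an illustration only, plain Python (not the DynamoDB API) over the lab’s items shows the same semantics; the two items beyond the first are invented stand-ins, since the script’s exact values aren’t reproduced in this post.

```python
# Illustration of DynamoDB query semantics in plain Python (NOT the
# DynamoDB API). The items beyond the first are hypothetical stand-ins.
ITEMS = [
    {"Artist": "No One You Know", "SongTitle": "Call Me Today",
     "AlbumTitle": "Somewhat Famous", "Year": 2015},
    {"Artist": "No One You Know", "SongTitle": "My Dog Spot", "Year": 2013},
    {"Artist": "The Acme Band", "SongTitle": "Look Out, World", "Year": 2013},
]

def query(items, artist, song_title=None, filters=None):
    results = [i for i in items if i["Artist"] == artist]           # partition key
    if song_title is not None:                                      # sort key
        results = [i for i in results if i["SongTitle"] == song_title]
    for attr, value in (filters or {}).items():                     # post-read filter
        results = [i for i in results if i.get(attr) == value]
    return results

print(len(query(ITEMS, "No One You Know")))
print(query(ITEMS, "No One You Know", song_title="Call Me Today"))
print(query(ITEMS, "No One You Know", filters={"Year": 2013}))
```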


Deleting the table
Deleting a table also deletes all the data within it. To delete the table, I select the table, click ‘Actions’, then press ‘Delete Table’. A confirmation pop-up appears, and once confirmed, the table is deleted.
Delete Table



I found DynamoDB to be a very user-friendly AWS environment due to its preset parameters, which meant very few configuration adjustments. Although this is suitable for simple tables such as the Music table created here, I can understand that the reduced amount of configuration can limit the use of the database. A thorough run-through and experimentation within DynamoDB would be a good way to gain an understanding of the extent of this service.

Introduction to Amazon Elastic Block Store (EBS)

Introduction and Aim
The purpose of this lab is to gain basic understanding and comprehension of what is the Elastic Block Store (EBS) in AWS.
Amazon Elastic Block Store is a service that provides block-level storage volumes for use with EC2 instances in the AWS Cloud. Volumes are replicated within their Availability Zone; this redundancy protects against the failure of a single AWS server and ensures consistently low latency. EBS also allows usage to be scaled up or down within very short time frames.


  • Create an EBS Volume in the Amazon Management Console
  • Add an EBS Volume to an instance
  • Snapshot an EBS Volume


Creating an Elastic Block Store Volume
EBS Volumes are found within the EC2 service. QwikLabs describes EBS volumes as ‘hard drives in a computer. The data on them persists through the lifetime of the volume and can be transported between virtual machines as needed.’

On the side panel of the EC2 display, the Elastic Block Store section contains two options: Volumes and Snapshots. When creating a volume, the Availability Zone that will contain it is important to consider, as the volume is replicated within that zone.
In the ‘Create Volume’ dialog box, the settings are:
(a) Type: General Purpose (SSD)
(b) Size (GiB): 1
(c) Availability Zone: ap-southeast-2a (Sydney, where my AWS resources are set)

Create Volume

The created volume can now be attached to an instance. I will use the instance that I created for my Introduction to EC2 with Windows Server lab.
InkedRunning Instance_LI

Adding an EBS Volume to an instance
If the state of the volume is ‘available’, then it can be attached to a running instance.
Attaching Instance Cropped


Snapshotting an EBS Volume and increasing performance
QwikLabs explains that ‘a snapshot of volume replicates the data in the volume. It also allows you to change the properties of the volume to enable features like provisioned IOPS’.

On the 1 GiB volume, I can right-click and select ‘Force Detach Volume’. The lab script mentions that the instance should be stopped before detaching, so that the drive does not need to be force-detached. However, in this lab the instance will remain running, as there isn’t anything of importance on it, and the lab is focused more on what can be done with a volume than on whether the order of actions follows production protocol.
InkedDetaching Volume_LI

Once the volume is detached, I can right click and ‘Create Snapshot’.
I want to ensure that the snapshot dialog box contains the following settings:
(a) Volume field matches the created volume
(b) Name: qlebslab
(c) Description: ql ebs volume snapshot
Create Snapshot

I can then create the snapshot which will be stored within Snapshot under Elastic Block Store.

Then I can right-click the snapshot and select ‘Create Volume’. I want the following settings within the volume dialog box:
(a) Type: Provisioned IOPS (SSD)
(b) Size (GiB): 10
(c) IOPS: 300
(d) Availability Zone: ap-southeast-2a (Sydney)

Create Volume through Snapshot
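Provisioned IOPS volumes have a maximum IOPS-to-size ratio; 50 IOPS per GiB is assumed here for io1, based on AWS documentation of the time, and should be verified against current docs. A quick check confirms that the lab’s 300 IOPS on 10 GiB is well within that limit:

```python
# Check a Provisioned IOPS (io1) request against the IOPS:GiB ratio limit.
# The 50:1 maximum is ASSUMED from AWS documentation of the era; verify
# against current docs before relying on it.
def iops_request_valid(iops, size_gib, max_ratio=50):
    return iops / size_gib <= max_ratio

print(iops_request_valid(300, 10))  # lab's request: a 30:1 ratio
```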

Under Volumes, the new volume is present; it contains the same data as the original but is larger in size and has provisioned IOPS.
Final Volume


It seems to me that EBS could be a very useful storage scheme for businesses that require the security of replicated data. However, if I were looking to start my own business, I would want to compare pricing against storage amount to decide whether this form of storage is worthwhile compared to S3 from a cost perspective.