Final Budget Report for QwikLabs

Having now completed the QwikLabs assignment, I thought it would be beneficial to review where all my expenses occurred.

Budget Spreadsheet

QwikLabs expenditure

My foremost concern is that my largest expenditure comes from a service to which I cannot confidently attribute the cost.

However, if I look over the previous budget logs, I can see that the same service has been billed before. This means that the large KMS expenditure seen here is the culmination of accumulated fees.

This eases my concerns, but it is still an area that I will keep in mind. Hence, I've created another budget, this one focusing solely on the KMS service. When the forecasted amount exceeds $1.00 (roughly double the current spend), I will receive a notification informing me of this. This should assist with any short-term future activities that may involve the KMS service.
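As a point of reference only, a budget like this could also be sketched through the AWS CLI. This is a hedged sketch, not what I did in the console: the account ID and email address are placeholders, and the exact CostFilters service name for KMS is an assumption.

    # Hypothetical sketch: a monthly $1.00 cost budget scoped to KMS, with an
    # email notification when the forecasted spend passes 100% of the limit.
    aws budgets create-budget \
        --account-id 111111111111 \
        --budget '{
            "BudgetName": "KMS-Budget",
            "BudgetLimit": {"Amount": "1.0", "Unit": "USD"},
            "BudgetType": "COST",
            "TimeUnit": "MONTHLY",
            "CostFilters": {"Service": ["AWS Key Management Service"]}
        }' \
        --notifications-with-subscribers '[{
            "Notification": {
                "NotificationType": "FORECASTED",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 100,
                "ThresholdType": "PERCENTAGE"
            },
            "Subscribers": [{"SubscriptionType": "EMAIL", "Address": "me@example.com"}]
        }]'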

Budget plan KMS

KMS Budget Plan

Introduction to Amazon Relational Database Service (RDS)-Linux

Introduction and Aim
The purpose of this lab is to create and use an Amazon Relational Database Service instance through AWS. Amazon RDS is a cloud-based service for creating, operating, and scaling relational databases; it supports MySQL, PostgreSQL, Oracle, and SQL Server engines.

 

Goals

  • Create an Amazon Relational Database Service (RDS) instance
  • Connect to the RDS instance with client software

 

Creating a Relational Database Service (RDS) instance
RDS is a service of its own within the AWS Management Console, rather than one created through EC2. The lab script requires the MySQL database to have a number of specific settings, which are as follows (a CLI sketch of the same configuration appears after the list):

  1. Specify DB Details
    InkedMySQL database option_LI
  2. DB Instance Class: db.t1.micro (the free-tier option)
    DB Instance Class db.t1.micro
    AWS notes that this DB instance class is a ‘previous generation instance, that may have lower performance and higher cost than the newer generation’. Because of this, I looked into db.t2.micro, the current-generation instance class, which has higher memory and network performance while still being free-tier eligible, so I will be using the current-generation instance in this lab.
    DB Instance Class db.t2.micro
  3. Multi-AZ Deployment: No
  4. Storage Type: General Purpose (SSD)
  5. Allocated Storage: 5
  6. DB Instance Identifier: RDSLab
  7. Master Username: AWSMaster
  8. Master Password: AWS12345
  9. Confirm Password: AWS12345
    Specify DB Details Full
  10. Next Step -> Configure Advanced Settings
  11. Publicly Accessible: No
  12. VPC Security Group(s): Choose a security group containing the text qls. I’m using a security group that I’ve created as I don’t have access to the QwikLabs one.
  13. Database Name: RDSLab
    Configure Advanced Settings
  14. Backup Retention period: 0 days (to disable automatic backups)
    CAS Part 2
  15. Launch DB Instance
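As sketched below, roughly the same configuration can be expressed through the AWS CLI. This is only a sketch of what the console does, not part of the lab itself; the security group ID is a placeholder, and gp2 is the API name for General Purpose (SSD).

    # Hypothetical CLI equivalent of the console settings above
    aws rds create-db-instance \
        --db-instance-identifier RDSLab \
        --db-name RDSLab \
        --engine mysql \
        --db-instance-class db.t2.micro \
        --storage-type gp2 \
        --allocated-storage 5 \
        --no-multi-az \
        --no-publicly-accessible \
        --backup-retention-period 0 \
        --vpc-security-group-ids sg-0123456789abcdef0 \
        --master-username AWSMaster \
        --master-user-password AWS12345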

Now that the database instance has been launched, it is important to double-check the security groups of the selected VPC and make sure that the inbound rules contain Type: MySQL/Aurora (3306) with Source: 0.0.0.0/0.
Editing Inbound Rules of SG
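The same inbound rule can be added from the AWS CLI; a sketch, assuming the security group ID (a placeholder below) is known:

    # Allow inbound MySQL/Aurora traffic (TCP 3306) from anywhere
    aws ec2 authorize-security-group-ingress \
        --group-id sg-0123456789abcdef0 \
        --protocol tcp \
        --port 3306 \
        --cidr 0.0.0.0/0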

 

Create an Amazon Linux instance from an Amazon Machine Image (AMI)
Under EC2's Launch Instance wizard, the Amazon Linux AMI is selected. The instance type is kept as the default, which is t2.micro. The next steps, ‘Configure Instance Details’ and ‘Add Storage’, are kept with their default settings. In the ‘Tag Instance’ step, the value given for the Name tag is RDS Free Lab. The final step is to review and launch.
RDS Free Lab Instance

Connecting to Amazon EC2 instance via SSH
Once the instance is launched, the PuTTY Secure Shell client is used to connect to the server. This involves entering the instance's public DNS value into the PuTTY Host Name box, prefixed with ec2-user@. In the category list, under the SSH option, clicking Auth reveals a ‘Private key file for authentication’ box; this is where I use the private key that I created previously.
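On a machine with OpenSSH rather than PuTTY, the equivalent connection is a single command; a sketch, with a placeholder key file and public DNS name:

    # Connect as ec2-user using the downloaded private key
    ssh -i my-key.pem ec2-user@ec2-203-0-113-25.ap-southeast-2.compute.amazonaws.com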

Connecting to the RDS instance
Within the terminal that opens up, the command ‘sudo yum install mysql’ is typed in, and the install agreement is accepted.
Install mysql

Once installed, to connect to MySQL, the following command is entered, using the endpoint name of the RDS instance:
‘mysql --host cjcfraykqpwn.rds.ap-southeast-2.amazonaws.com --password --user AWSMaster’
This prompts for the AWS12345 password that was created earlier.
InkedEnter mySQL_LI
The darker text at the top is where I accidentally typed the command incorrectly.

MySQL is now logged into, and the mysql> prompt is visible. The ‘show databases;’ command can be entered in order to check whether any records return.
mySQL Show Databases
The returned output shows that the RDS instance has been connected to successfully.

 

Conclusion
I found this to be an interesting lab, using the shell to install MySQL and connect to RDS. Prior to attempting the Linux RDS lab, I had attempted to complete the Windows RDS lab. I'm curious to find out whether the Windows VM's command tool would be as successful in connecting to RDS.

Introduction to AWS Lambda

Introduction and Aim
The purpose of this lab is to gain a basic understanding of AWS Lambda by creating and ‘deploying a lambda function in an event driven environment’, as stated in the QwikLabs lab script.
The lab script states that ‘Lambda is a compute service that runs code in response to events and automatically manages the compute resources, making it easy to build applications that respond quickly to new information.’ Lambda is serverless.

 

Goals

  • Create an AWS Lambda S3 event function
  • Configure an Amazon S3 bucket
  • Upload a file to an Amazon S3 bucket
  • Monitor AWS Lambda S3 functions through Amazon CloudWatch

 

Configure an Amazon S3 bucket as the Lambda event source
The first step in configuring an Amazon S3 bucket is to determine the region the lab is running in; in my case, it's Sydney. Under S3 services, the bucket I've created is called ql-lambda, and it is set in my current region.
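For reference, the same bucket could be created from the AWS CLI; a sketch, assuming the CLI is configured for my account (bucket names are globally unique, so ql-lambda may already be taken by someone else):

    aws s3 mb s3://ql-lambda --region ap-southeast-2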

Create an S3 function
On the AWS console, Lambda is located under Services. In the Lambda console, the ‘Get Started Now’ button is pressed, followed by the ‘New Function’ button.
Lambda
The QwikLabs instructions for creating the function are as follows:

Select Blueprint: s3-get-object
Configure Triggers: Set the bucket name to the bucket that has just been created
Set Event Type to ‘Object Created (All)’
Enable Checkbox: Enable Trigger
InkedConfigure Triggers_LI

-> Next
Configure Function:
Name: S3Function
Description: S3 Function for Lambda
Runtime: Node.js
There were two available Node.js runtime versions, so I chose Node.js 4.3.
Configure part 1
Handler: Leave as index.handler
Role: Choose an existing role
Existing Role: lambda-role
As I'm not doing this through QwikLabs, there wasn't an existing role called lambda-role. Instead, I created a new role called lambda-role. This role contained two policies: Simple Microservice Permissions and S3 Object Read-Only Permissions. I chose those two as they seemed to best fit the role required for this lab.
Configure part 2 (handler and role)
-> Advanced Settings
Memory (MB): 128
Timeout (s): 5

The final section involves the Review section, and then the function can be created.
Review

 

Upload a file to an Amazon S3 bucket to trigger a Lambda event
The next step is to upload a file to the S3 bucket in order to trigger a call to the Lambda function.
The file uploaded to the bucket, for the purpose of this test, contains only lowercase lettering with no spaces.
Bucket with Upload
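The upload itself could equally be done from the AWS CLI; a sketch, with a hypothetical file name:

    # Uploading an object fires the 'Object Created (All)' trigger configured above
    aws s3 cp test-file.txt s3://ql-lambda/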

In the Lambda functions page, the function itself can be clicked and then the ‘Monitoring’ tab can be opened. This will provide four graphs: Invocation count, Invocation duration, Invocation errors, and Throttled invocations.
Monitoring in Lambda
Below is a screenshot of the QwikLabs script, which explains what each graph measures.
Graph Explanation

All of this information can be viewed in CloudWatch. This can be accessed by clicking on the ‘View logs in CloudWatch’ button, which is located above the graphs. In the logs section of CloudWatch, the first log stream contains information on ‘Start Request’, ‘End Request’, and ‘Report Request’ of the associated lambda event.
cw log info
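The same log data can be pulled from the AWS CLI; a sketch, assuming the default log group name that Lambda creates for the function (the stream name is a placeholder to be copied from the first command's output):

    # Lambda writes to /aws/lambda/<function-name> by default
    aws logs describe-log-streams \
        --log-group-name /aws/lambda/S3Function \
        --order-by LastEventTime --descending

    # Fetch the start/end/report lines for one invocation
    aws logs get-log-events \
        --log-group-name /aws/lambda/S3Function \
        --log-stream-name '<stream-name-from-previous-command>'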

 

Conclusion
The information recorded when a Lambda event is triggered appears to give a very informative overview of activity. If this sort of service were implemented in a business, the invocation metrics may help keep track of business expenditure arising from staff usage.

Introduction to AWS CloudFormation

Introduction and Aim
The purpose of this lab is to use an Amazon EC2 instance and install WordPress with a local MySQL database. QwikLabs states that AWS CloudFormation ‘gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion.’

 

Goals

  • Create a stack using an AWS CloudFormation template
  • Monitor the progress of the stack creation
  • Use the stack resources
  • Clean up when the stack is no longer required

 

Create a stack
In this section, I create a stack from an AWS CloudFormation template.
InkedCreate Stack_LI

CloudFormation is one of the services found in the AWS management console. In the service, I can ‘Create Stack’, selecting the ‘WordPress blog’ template.
The details are as follows:
Name: MyWPTestStack
DBPassword: Pa55word
DBRootPassword: Pa55word1
DBUser: AWSQLStudent
Specifying Details

The lab script mentions here that ‘the same WordPress template contains an input parameter, KeyName, which specifies the EC2 key pair for the Amazon EC2 instance that is declared in the template. An Amazon key pair has been created for you.’

As I'm only following the lab script, not actually completing the lab through QwikLabs, I don't have the pre-made QwikLabs key pair. However, I can create EC2 instances alongside the ones I've already made, and I already have a key pair of my own.

In the KeyName drop down on the Details page, I select the key pair that I’ve already created.

The automatically filled parameters are kept on their default settings, and no ‘Tags’ or ‘Advanced Options’ settings are changed, so all that is left to do is create the stack.
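As a sketch, the same stack could be launched from the AWS CLI. The template URL below is an assumption based on AWS's public sample WordPress template, and the key pair name is a placeholder; the parameters mirror the values entered above.

    aws cloudformation create-stack \
        --stack-name MyWPTestStack \
        --template-url https://s3.amazonaws.com/cloudformation-templates-us-east-1/WordPress_Single_Instance.template \
        --parameters ParameterKey=DBUser,ParameterValue=AWSQLStudent \
                     ParameterKey=DBPassword,ParameterValue=Pa55word \
                     ParameterKey=DBRootPassword,ParameterValue=Pa55word1 \
                     ParameterKey=KeyName,ParameterValue=my-key-pair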

Stack Review

Monitoring stack creation
The CloudFormation service monitors the progress of the stack's creation. While the stack is being created, the status shows CREATE_IN_PROGRESS. Once finished, the status shows CREATE_COMPLETE.
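The same status can be polled from the AWS CLI; a minimal sketch:

    aws cloudformation describe-stacks \
        --stack-name MyWPTestStack \
        --query 'Stacks[0].StackStatus'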


 

 

Using the stack
The WordPress installation still needs to be completed. This is done by clicking on the Outputs tab and using the hyperlink located on the page.
Outputs WP
Once the installation is complete, the WordPress dashboard appears. From here, the blog can be customized and posts can be written.


 

Deleting the stack
Deleting the stack involves selecting the stack, pressing ‘Delete Stack’ under ‘Actions’, and then confirming the deletion.
Delete Stack
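The CLI equivalent is a single command; a sketch:

    aws cloudformation delete-stack --stack-name MyWPTestStack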

During the process, the stack status changes to DELETE_IN_PROGRESS. When a stack is deleted, all of the resources associated with the stack will also be deleted.
DELETE_IN_PROGRESS

 

Conclusion
By following this lab, I've managed to learn how to use CloudFormation to create a stack and install a WordPress template. The implementation of stacks appears to be very useful for running applications, though I would be interested in comparing it to the AWS Lambda service.

Introduction to Elastic Load Balancing

Introduction and Aim
The purpose of this lab is to gain an understanding of the Amazon Elastic Load Balancer. QwikLabs describes the Amazon Elastic Load Balancer (ELB) as a ‘service that automatically distributes incoming application traffic across multiple EC2 instances.’ This can increase the fault tolerance of applications, as the ELB service responds to incoming traffic with the required load balancing capacity. The ELB service can operate within a single availability zone or across several zones, and can also be used in a VPC.

 

Goals

  • Logging into the Amazon Management Console
  • Creating an Elastic Load Balancer
  • Adding Instances to an Elastic Load Balancer

 

Logging into the Amazon Management console
When using AWS, I log into the console through my administrator account rather than my root account. This is a security measure, as my root account has access to the financial aspects of AWS. If I were intending to use AWS for business or for sensitive information, I would create more users, each with access corresponding to the level of security required.
In order to reduce latency, my AWS account is set to the Sydney region. Although not every service is available in Sydney, I'm currently only working with the basics of what AWS can provide, so I haven't yet come across any availability issues.

 

Creating an Elastic Load Balancer
ELBs are located within the EC2 service. For this lab, I choose a Classic Load Balancer, which I've called ‘free-lab-load-balancer’.
Classic LB
The security group assigned to the ELB is a new one called ELB-SG1. The lab script has a preset one, but as the script is being used only as a guideline, I needed to use an existing group or make a new one.
InkedAssign SG (NEW) New SG_LI
The Type is an AWS preset configuration, so I’m keeping it as is.
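A rough CLI sketch of creating the same classic load balancer (the security group ID is a placeholder, and the listener mirrors the default HTTP port-80 configuration; in a VPC, --subnets would be used instead of --availability-zones):

    aws elb create-load-balancer \
        --load-balancer-name free-lab-load-balancer \
        --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
        --availability-zones ap-southeast-2a \
        --security-groups sg-0123456789abcdef0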

The next step in the load balancer launch is ‘Configure Security Settings’, in which nothing is changed, so I move on to the ‘Configure Health Check’ screen. When I did this, a warning screen appeared:
Config Sec Settings Warning
This warning is something to be heeded for future professional use, but not for this lab.
The lab script asks for the following values:
Response Timeout: 2 seconds
Health Check Interval: 6 seconds
Unhealthy Threshold: 2
Healthy Threshold: 2
Config Health Check

The next step is to add EC2 instances. I chose two arbitrary instances that were displayed in my instance list (a CLI sketch of the health check and registration steps follows below).
Adding EC2 Instances
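The health-check values and instance registration above could be scripted as follows; a sketch with placeholder instance IDs, where the Target mirrors the console default of HTTP:80/index.html:

    aws elb configure-health-check \
        --load-balancer-name free-lab-load-balancer \
        --health-check Target=HTTP:80/index.html,Timeout=2,Interval=6,UnhealthyThreshold=2,HealthyThreshold=2

    aws elb register-instances-with-load-balancer \
        --load-balancer-name free-lab-load-balancer \
        --instances i-0123456789abcdef0 i-0fedcba9876543210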

As Tags are not a part of this exercise, I move on to the final step of reviewing all the load balancer specifications.
ELB Review
After checking that everything matched the script, the load balancer can be created.

 

Once the load balancer is created, I can click on the ‘Instances’ tab alongside the ‘Description’ tab near the bottom of the screen. The ELB displays hover text over the ‘i’ icon next to each instance, which reports on the status of that instance in relation to the load balancer.
Instances Within the ELB
In the ‘Description’ tab, the DNS name field contains a hyperlink that, when copied into the browser window, directs to the load balancer's page. QwikLabs states that ‘While it all looks the same on the front end, as you refresh the page, on the back end your requests are being load balanced between your two running instances.’

The DNS link didn't work for me, and instead just showed a blank screen. Upon further inspection with the Firefox developer tools, the network tab was reporting an HTTP 503 error, which indicates a back-end server problem. With a Classic Load Balancer, a 503 typically means no registered instance is passing its health checks, which is plausible here, as the two arbitrary instances weren't necessarily serving anything on the health-check port.
Back End Server Unavailable

I considered that perhaps I had made a mistake during the launch process, so I created another load balancer, taking a look at a classmate's blog for assistance and rigorously looking over the lab script again.

The DNS link result this time was ‘Server not found’. Using the developer tools, I was able to see that it wasn't the same problem as my previous load balancer, which implied that it was no longer a back-end server issue.
Network Display for LB 2
DNS Resolution
Unfortunately, I still didn't know what the problem was, or why it was no longer a back-end issue. A ‘Server not found’ response suggests the DNS name itself was failing to resolve, which can happen in the first few minutes after a load balancer is created.

 

Conclusion
This was an interesting lab in the application of a multi-instance service such as the Amazon Elastic Load Balancer. I would like to know why the DNS link failed, and I'm not confident that I could determine that on my own. Having a trained person explain the methods and reasoning behind the ELB launch specifications may be a beneficial way of helping me understand how to correctly implement the ELB service.

Introduction to Amazon DynamoDB

Introduction and Aim
The purpose of this lab is to create a simple table in Amazon DynamoDB, which is used to store information about a music library. QwikLabs describes Amazon DynamoDB as ‘a fast and flexible NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale.’

 

Goals

  • Creating an Amazon DynamoDB table
  • Loading data into an Amazon DynamoDB table
  • Querying Amazon DynamoDB
  • Deleting an Amazon DynamoDB table

 

Creating a new table
In the DynamoDB console, I create a table called ‘Music’. For the primary key's partition key, I enter ‘Artist’ with its type as String. The next step is to click ‘Add sort key’ and create a new field called ‘SongTitle’, also of type String.
The table settings are left as default, and now the table can be created.
InkedCreate Table_LI
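A CLI sketch of the same table; the read/write capacity of 5 units each is an assumption matching the console defaults of the time:

    aws dynamodb create-table \
        --table-name Music \
        --attribute-definitions AttributeName=Artist,AttributeType=S \
                                AttributeName=SongTitle,AttributeType=S \
        --key-schema AttributeName=Artist,KeyType=HASH \
                     AttributeName=SongTitle,KeyType=RANGE \
        --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5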

Creating a DynamoDB table is a very simple process on the user's end, as AWS handles most of the set-up.

 

Adding and modifying table data
In DynamoDB, each item is made up of attributes, similar to the way an entity contains attributes. Only the primary key attributes are required.
In order to create an item, the specific table is selected; in my case it's the Music table. Then the Items tab can be clicked and a ‘Create item’ option will be displayed.
The lab script asks for the following information to be input into the new item:

Artist: No One You Know (String)
SongTitle: Call Me Today (String)

Then, to create another attribute, the ‘Append’ button is used. In this instance, another String is added, with Field: AlbumTitle and Value: Somewhat Famous.
Additional Attribute to Create Item

Another attribute is made, this time of type Number, with Field: Year and Value: 2015.

Then the item can be saved, now with four attributes.
Personalised Item to Create
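The same item, expressed as a CLI sketch (DynamoDB's JSON format tags each value with its type, S for String and N for Number):

    aws dynamodb put-item \
        --table-name Music \
        --item '{
            "Artist":     {"S": "No One You Know"},
            "SongTitle":  {"S": "Call Me Today"},
            "AlbumTitle": {"S": "Somewhat Famous"},
            "Year":       {"N": "2015"}
        }'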

The lab script asks for two more items to be created. The final table looks like this:
All Items

 

Modifying an existing item in the table
The table can be modified by selecting the Music table and either double-clicking on the cell to edit or, as the lab script suggests, clicking the Items tab, selecting the year, pressing Edit in the Actions drop-down, and saving any changes made. The lab script directs for the year ‘2014’ to be changed to ‘2013’.
Editing Date on Music Item

 

Querying the table
The table can be queried to find specific items based on various criteria. The lab script makes mention that ‘the primary key is made of Artist (partition key) and SongTitle (sort key)’.
In the Music table, under the Items tab, I can change the drop-down labelled ‘Scan’ to ‘Query’.
The first query requires me to input ‘No One You Know’ into the partition key (Artist, String) value box. Once searched, all tracks with the artist ‘No One You Know’ are displayed.
Query Search -No One You Know
The next query keeps the previous partition key and adds a sort key condition: SongTitle (String) = ‘Call Me Today’.
Query Search-Call Me Today

For the final query, still keeping the partition key specification, the sort key data is cleared and the ‘Add filter’ button is pressed. In the new filter row, the attribute is set to Year, of type Number, with the value set to 2013. This limits the results to songs by the specified artist from the specified year.
Query Search -Filter Year 2013
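The final query could be sketched from the CLI as follows. Year is a reserved word in DynamoDB, so it has to be aliased through an expression attribute name:

    aws dynamodb query \
        --table-name Music \
        --key-condition-expression 'Artist = :a' \
        --filter-expression '#y = :y' \
        --expression-attribute-names '{"#y": "Year"}' \
        --expression-attribute-values '{":a": {"S": "No One You Know"}, ":y": {"N": "2013"}}'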

 

Deleting the table
Deleting a table also deletes all the data within it. To delete the table, I select it, click ‘Actions’, then press ‘Delete table’. A confirmation pop-up appears and, once confirmed, the table is deleted.
Delete Table
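The CLI sketch of the same step is a one-liner:

    aws dynamodb delete-table --table-name Music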

 

 

Conclusion
I found DynamoDB to be a very user-friendly AWS environment due to its preset parameters, which meant very few configuration adjustments. Although this is suitable for simple tables such as the Music table created here, I can understand that the reduced amount of configuration can limit the use of the database. A thorough run-through and some experimentation within DynamoDB would be a good way of understanding the full extent of this service.

Introduction to Amazon Elastic Block Store (EBS)

Introduction and Aim
The purpose of this lab is to gain a basic understanding of the Elastic Block Store (EBS) in AWS.
The Amazon Elastic Block Store is a service that provides block-level storage volumes for use with EC2 instances in the AWS cloud. Block-level storage volumes are replicated within their availability zone; this redundancy increases protection against an AWS server failure and ensures consistently low latency. EBS also allows usage to be scaled up and down within very short time frames.

Goals

  • Create an EBS Volume in the Amazon Management Console
  • Add an EBS Volume to an instance
  • Snapshot an EBS Volume

 

Creating an Elastic Block Store Volume
EBS volumes are found within the EC2 service. QwikLabs describes EBS volumes as ‘hard drives in a computer. The data on them persists through the lifetime of the volume and can be transported between virtual machines as needed.’

On the side panel of the EC2 display, the EBS section contains two options: Volumes and Snapshots. When creating a volume, the availability zone that will contain it is important to consider, as the volume is replicated within that zone.
In the ‘Create Volume’ dialog box, the settings are:
(a) Type: General Purpose (SSD)
(b) Size (GiB): 1
(c) Availability Zone: ap-southeast-2a (Sydney, the region in which my AWS account is set)

Create Volume
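A CLI sketch of the same volume (gp2 is the API name for General Purpose SSD):

    aws ec2 create-volume \
        --availability-zone ap-southeast-2a \
        --volume-type gp2 \
        --size 1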

The created volume can be attached to an instance. I will use the instance that I created for my Introduction to EC2 with Windows Server lab.
InkedRunning Instance_LI

Adding an EBS Volume to an instance
If the state of the volume is ‘available’, it can be attached to a running instance.
Attaching Instance Cropped
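Attaching could be sketched from the CLI with placeholder IDs; /dev/sdf is a conventional device name for a secondary volume:

    aws ec2 attach-volume \
        --volume-id vol-0123456789abcdef0 \
        --instance-id i-0123456789abcdef0 \
        --device /dev/sdf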

 

Snapshotting an EBS Volume and increasing performance
QwikLabs explains that ‘a snapshot of volume replicates the data in the volume. It also allows you to change the properties of the volume to enable features like provisioned IOPS’.

On the 1 GiB volume, I can right-click and select ‘Force Detach Volume’. The lab script mentions that the instance should be stopped before detaching, so that the drive does not need to be force-detached. However, in this lab, the instance will remain running, as there isn't anything of importance on it, and the lab focuses more on what can be done with a volume than on whether the order of actions follows production protocol.
InkedDetaching Volume_LI

Once the volume is detached, I can right-click it and select ‘Create Snapshot’.
I want to ensure that the snapshot dialog box contains the following settings:
(a) Volume field matches the created volume
(b) Name Box: qlebslab
(c) Description: ql ebs volume snapshot
Create Snapshot

I can then create the snapshot, which will be stored under Snapshots in the Elastic Block Store section.
Snapshot

Then I can right-click the snapshot and select ‘Create Volume’. I want the following settings to be set within the volume dialog box:
(a) Type: Provisioned IOPS (SSD)
(b) Size (GiB): 10
(c) IOPS: 300
(d) Availability Zone: Sydney: ap-southeast-2a

Create Volume through Snapshot
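Both steps, snapshotting and restoring into a larger Provisioned IOPS volume, could be sketched from the CLI with placeholder IDs:

    # Snapshot the detached 1 GiB volume
    aws ec2 create-snapshot \
        --volume-id vol-0123456789abcdef0 \
        --description "ql ebs volume snapshot"

    # Restore the snapshot into a larger Provisioned IOPS (io1) volume
    aws ec2 create-volume \
        --snapshot-id snap-0123456789abcdef0 \
        --availability-zone ap-southeast-2a \
        --volume-type io1 \
        --iops 300 \
        --size 10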

Under Volumes, the new volume is present; it contains the same data as the original but is larger in size and has provisioned IOPS.
Final Volume

 

Conclusion
It seems to me that EBS could be a very useful storage scheme for businesses that require the security of replicated data. However, if I were looking to start my own business, I would want to compare pricing against storage requirements to determine whether this form of storage is worthwhile compared to S3 from a cost perspective.

Introduction to Amazon Elastic Compute Cloud (EC2) with Windows Server

Introduction and Aim
The purpose of this QwikLabs session is to run a Windows server through an Amazon EC2 instance.
For more information on EC2, check out my blog ‘Introduction to Amazon Elastic Compute Cloud (EC2)‘.

Goals

  • Logging into the Amazon Management Console
  • Creating a Windows Server instance from an Amazon Machine Image (AMI)
  • Finding the instance in the Amazon Management Console
  • Logging into the instance

 

Logging into the Amazon Management Console
When logging into Amazon services, I ensure that I am logging in through the https://console.aws.amazon.com website, as this provides access to my administration account but not my root account. This is healthy practice, both as a security measure and as a business technique. The next step is to check my region, as not all AWS services are available in every zone. My region is set to Sydney, which is optimal for what this lab involves: as Sydney is the closest region to where I live, latency is reduced while still providing the resources that I require.

 

Create an Amazon EC2 instance running Windows server
The Windows server that will run on the instance is Windows Server 2012 R2 Base, which is available on the free tier, so I have no qualms about choosing it.

The next move is to run through the configuration steps:
>Configure Instance Details: Everything is kept as default.
>Add Storage: Everything is kept as default
>Tag Instance: A name is created for the tag to assist in easy identification
>Configure Security Group: Leave the setting as ‘Create a new security group’ with a rule for port 3389 open, which is RDP (Remote Desktop Protocol).
>Review Instance Launch: This is a summary of the configuration choices
Review Instance Launch

The final step is to choose or create a key pair; I choose my existing key pair. Once the instance has been launched, it is a matter of waiting until the instance state shows ‘running’.

 

Connect to your Amazon EC2 instance
In order to connect to the instance, I need an RDP client. As I'm already using a Windows computer, one is available, and the console provides a remote desktop file when I choose to connect to the instance.
Connect to Instance Popup Window Ws

Once the RDP file is downloaded, I can retrieve the password that will be used for the Windows instance. The password is acquired by providing my private key, which is used to decrypt the encrypted administrator password. The decrypted form of the password is then used to log in to the Windows instance.
InkedCon2Inst Get Password Ws_LI
(The above screenshot has the encrypted password and Key Name whited-out for security reasons).
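The password retrieval can also be sketched from the AWS CLI, with a placeholder instance ID and key file; supplying the private key makes the CLI return the password already decrypted:

    aws ec2 get-password-data \
        --instance-id i-0123456789abcdef0 \
        --priv-launch-key my-key.pem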
Now that an RDP program is available and the password has been determined, I can complete the Windows server launch. The RDP file automatically connects to the server, so all that is required is the password. The result is as expected: a Windows Server 2012 instance is launched, as shown in the screenshots below.


 

Conclusion
Amazon EC2 is proficient at running both Windows servers and Linux servers (the latter were used in the previous lab). It is interesting to me that the Windows layout is far more application-oriented compared to the command line of the Linux server. This may be due to the different setups of the operating systems, and is something that I could potentially look into further.

QwikLabs: Introduction to AWS Identity and Access Management (IAM)

Introduction
AWS facilitates security and user control over accounts through IAM. In a business environment, there would be different restrictions on certain accounts pertaining to their respective level of clearance within the business system. This QwikLabs course aims to provide a basic understanding of how to manage and utilize the IAM system for various accounts.

Topics covered in this Lab

  • Exploring pre-created IAM users and groups
  • Inspecting IAM policies as applied to the pre-created groups
  • Following a real-world scenario, adding users to groups with specific capabilities enabled
  • Updating passwords for users
  • Locating and using the IAM sign-in URL
  • Experimenting with the effects of policies on service access

Exploring Users and Groups

AWS Management Console -> Services -> IAM

In the lab, there are already three users set up: “userone”, “usertwo”, and “userthree”. Since I am using the lab as a guideline rather than running it in QwikLabs, I need to set up three new users on my own AWS account in order to follow the instructions given in QwikLabs.

Username Password

This is where the lab script and my practical application of it start to differ. Rather than adding groups to the already-created users, I will add each user to one of the script-suggested groups as part of the user set-up (a CLI sketch follows the list below).

UserUno: Group = “EC2Support”, Attached Policies = “SupportUser”
UserDos: Group = “EC2Admin”, Attached Policies = “SystemAdministrator”
UserTres: Group = “S3Admin”, Attached Policies = “DatabaseAdministrator”
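As a sketch of this set-up for one of the three users via the AWS CLI (the policy ARN is the AWS-managed job-function policy; the other two users follow the same pattern):

    # Create the group and attach the managed job-function policy
    aws iam create-group --group-name EC2Support
    aws iam attach-group-policy \
        --group-name EC2Support \
        --policy-arn arn:aws:iam::aws:policy/job-function/SupportUser

    # Create the user and add it to the group
    aws iam create-user --user-name UserUno
    aws iam add-user-to-group --group-name EC2Support --user-name UserUno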

Determining what policies to attach to these groups was harder than I first anticipated. Having never looked at all the different policies available before, I was slightly overwhelmed. However, I looked at my choices and realized that I could identify policies by their job functions. From this point onward I sought help from the AWS user guide and from the policy code supplied under each policy choice.

The AWS policy user guide can be found here: docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_job-functions.html?icmpid=docs_iam_console

Setting Passwords
Once the three new users had been created and associated with a group, I logged in as each user and was forced to create a new password. The new passwords had to follow certain specifications that I had arranged previously through my administrator account.

Password Settings

Experimentation of Policies on Access
This part involves testing my accounts to determine whether I have grouped them with the right access restrictions and permissions.

  • UserUno: UserUno is in the group labelled “EC2Support”. As such, I assigned the job policy of “SupportUser”, which the AWS Policy User Guide describes as ‘This policy grants permission to create and update AWS support cases.’ The user in this case can ‘contact AWS support, create support cases, and view the existing cases.’

With this in mind, I gave UserUno three tasks: view other users, access EC2, and access S3 buckets.

Access other users:
InkedUno User Permission_LI
Result: UserUno has permission to see other users.

Access EC2
InkedUno EC2_LI
Result: Uno can access EC2.

Access S3 bucket
InkedS3 uno_LI
Result: Uno cannot access the bucket.

Conclusion: The SupportUser job policy for Uno has many of the required authorizations, but is not quite correct, as Uno shouldn't be able to access EC2.

  • UserDos: UserDos is in the group “EC2Admin” and has the job policy of “SystemAdministrator”. The AWS Policy Guide states that ‘This user sets up and maintains resources for development operations.’ To gain a more comprehensive understanding of this user's role, I gave it the same three tasks I had given UserUno.

Access other users:
InkedDos user permission_LI
Result: Dos does not have permission to manipulate the other users

Access EC2:
InkedInstance Dos_LI
Result: Dos is able to access EC2

Access S3 bucket:
Inkeds3 dos_LI
Result: Dos is able to access S3 buckets

Conclusion: Again, Dos' job policy incorporates most of the features that I desired. However, as an EC2Admin, Dos should only be able to access EC2, not S3.

  • UserTres: UserTres is in the group “S3Admin” and has been assigned the job policy of “DatabaseAdministrator”. The AWS Policy Guide states that ‘This user sets up, configures, and maintains databases in the AWS cloud.’ Below are the results of the three tasks.

Access other users
InkedTres User Permission_LI
Result: Tres is unable to access other users

Access EC2:
InkedInstance Tres_LI
Result: Tres is unauthorized to access EC2

Access S3 bucket:
InkedS3 tres_LI
Result: Tres is able to access the S3 bucket

Conclusion: The job policy applied to Tres is exactly what I want for a user that is an S3Admin only.

Final Remarks
Although this practical digressed from the QwikLabs script due to the lack of pre-created users, it still provided me with plenty of insight into how to create and define users within AWS. The job policies that I associated with users Uno and Dos weren't quite what I was after. Perhaps the minor discrepancies of the chosen job policies could be resolved by using inline policies, which, as the AWS guide defines, are policies inherent or unique to a single user.