Connecting to the Stack’s Instance

Now that the stack’s instance is running at a cheaper rate, I will try to connect to the t2.micro instance.

My first attempt resulted in the following error:

003 Still not able to access _Error

The first thing that I noticed was that the instance didn’t have a public IP attached.
003 CFTemplate EC2 Instance Info

When I looked in the elastic IP section of the EC2 service, there were none there. As an elastic IP is important in allowing me to connect to the RDGW instance, I added the JSON for elastic IPs into the CloudFormer template that I was using; they were missing because I had accidentally left them out during the CloudFormer template stack creation process.
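
For reference, the resource I added is sketched below. This is only an outline, and “RDGWEIP” and “RDGW1” are stand-in logical names rather than the exact ones from my template:

```json
"RDGWEIP": {
  "Type": "AWS::EC2::EIP",
  "Properties": {
    "Domain": "vpc",
    "InstanceId": { "Ref": "RDGW1" }
  }
}
```

Including the InstanceId property is what associates the address with the instance; without it, the EIP is created but left unattached.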

When I ran the CloudFormer stack again, the RDGW instance still didn’t have a public IP; however, the elastic IPs had been created, so it was only a matter of manually associating one with the instance.

I then tried to connect again, but the connection still failed, with the same error response showing.

I then considered that it might be an issue with the security groups attached to the stack’s VPC. My initial response was to adjust the JSON and set all of the security groups’ ingress CIDRs to 0.0.0.0/0. I did this to make everything open, as a way of determining whether or not the failure to connect was due to the security groups. My next attempt to connect was still unsuccessful, which showed that it was not a security group issue. With that ruled out, I reverted the security groups to their original addresses for best-practice purposes.
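
As a rough sketch of what this looked like in the template (showing the RDP rule as an example; the real groups had several rules each), each ingress entry’s CIDR was changed like so:

```json
"SecurityGroupIngress": [
  {
    "IpProtocol": "tcp",
    "FromPort": "3389",
    "ToPort": "3389",
    "CidrIp": "0.0.0.0/0"
  }
]
```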

My next consideration was to use my other CloudFormer template, which had only a single VPC in its design, to determine whether a flaw in the template’s construction was causing the connection failure. This, however, was not the case, as the single-VPC template also failed to connect.

My final attempt was to change the WiFi connection that I was using. NMIT has two networks, each with different firewall settings, and the one I normally use has been known to block remote connections. This was also unsuccessful, as was my attempt over my home network.

With all of these potential causes ruled out, I sought help from my classmates, asking for the design templates of their successful stacks so that I could compare them against my own and spot the difference causing my connection error. While providing his design template, one classmate suggested that I create a new CloudFormer template from the Microsoft Quick Start, Scenario 3, and build the stack to be completely open to any RDP IP address.

I did as he suggested, and during the creation of the initial stacks based on Scenario 3, I set the Network Configuration value ‘Allowed Remote Desktop Gateway External Access CIDR’ to 0.0.0.0/0.
005 Specify Details NS AZ_Options

Once the stack results were organized through CloudFormer, I ran the new template, removing any errors in the JSON that were causing the stack to roll back. Once the stack was complete, I attached an elastic IP to the instance and attempted to connect to it. The result was successful.
01 ADDS RDP Success

The previous failures were due to a mismatch in the remote desktop gateway external access CIDR that had been set up when the stacks were first created, before CloudFormer. Once that was resolved, the connection was available.


Changing Instance Size in Stack Template

The Microsoft Quick Start template specifies an enterprise-scale t2.large instance, which costs more than this project needs. I adjusted this in the CloudFormer template by replacing “t2.large” with “t2.micro”.

003 Instance size change

The template was able to run with this smaller instance, which is beneficial for me as I now don’t need to be as concerned with the costs involved, especially as an instance is billed at the same rate for a full hour or any part of one.
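
The change itself is small; in the template’s instance resource it amounts to something like the following (the logical name “RDGW1” and the AMI ID are placeholders here, not values from my actual template):

```json
"RDGW1": {
  "Type": "AWS::EC2::Instance",
  "Properties": {
    "InstanceType": "t2.micro",
    "ImageId": "ami-xxxxxxxx"
  }
}
```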

Budget Update: 08/05/2017

For the past week, ever since the DinoStore project ended, my AWS account hasn’t been used.

As it is a new month, my billing log has also rolled over.
Billing Total
The charge of $0.14 in credits comes from two services: EC2 and KMS.

The EC2 charge, as seen in the screenshot below, was from the EBS service within EC2. The instances that I had been creating for my DinoStore had automatically created volumes in EBS.
EC2 only 1

When I shut down the rest of my DinoStore services, I forgot about EBS, which is why I have been charged even though I am no longer working on the DinoStore project.


The KMS charge is one whose origin I am unsure of.
KMS only 1
A large factor in my confusion is that KMS reports a region-based charge, but when I trace it back to the IAM service, IAM is not region-specific, so I’m left unsure how to identify and reduce this charge.

My current guess is that it may be due to something occurring each time I log into AWS, but I am still very much unsure.

I can’t seem to find any related AWS documentation, so my current plan is to check with my classmates to determine whether they have the same charge on their Billing Info page.


Lab 6: Creating and using AMIs

The AMIs used in this lab are created from pre-existing instances.

In EC2, the web server instance is made into an image by right-clicking on the instance and choosing ‘Image->Create Image’.
001 Create image in Web Server Instance
For this image, the name is: DinoStoreWebServer, and the image description is: ‘Image of DinoStore website vm.’
002 Image Format

The queue server is also made into an image, with its name being: DinoQueueServer, and its image description: ‘Image of DinoStore queue server vm.’

003 AMI Interface

The AMIs are contained within the EC2 service.

Once created, the web server image is launched with the following specifications:

Type: t2.micro
(The subnet could be in a different availability zone to spread instances around the region. This is potentially good practice if the budget allows for it; in my case, however, there is no need.)
IAM Role: WebServerRole
Tag (Name): LabSix-WS
Security Group: WebRDPGroup
Key Pair: Existing key pair

004 WS-AMI Instance Review 1_2
005 WS-AMI Instance Review 2_2

While waiting for the image to initialize, the original web server is opened in the local browser, and its IP address is noted.
WebRDP DNS in Browser

Once ready, the public DNS of the image is copied into a new tab in the browser, with the website name attached to the end of the URL. The IP address of this is also noted.
LabSix WS DNS in Browser

They have different IP addresses.

Challenges
My only challenge with this lab was that I didn’t know my website’s full name. This meant that I was entering /Net702.DinoStore/ and receiving this error:
007 Server Error On Local Browser

Or trying /1-Net702.DinoStore/ at the end of the DNS and receiving this message in my browser window:
008 WebRDP Server w LabSix DNS in Browser

Eventually I realized that, as I was only using an image, I would be able to locate my website details from the original web server instance. After opening the RDP session and connecting to DinoStore through IIS, I was able to determine that my website name was /1-Net702.DinoStore/Net702.DinoStore/, due to the nested folder created when I copied my DinoStore folder into wwwroot.

Lab 5: Adding EC2 Virtual Machines and Deploying the Web App

The lab script explains that the initial step in this lab is “to create roles that access other amazon Services so that applications running on EC2 instances don’t have to have credentials baked into the code.”

In the IAM service, a policy can be created using the Policy Generator. This policy has the following settings:

Part 1
Effect: Allow
AWS Service: Amazon DynamoDB
Actions: deleteitem, describetable, getitem, putitem, updateitem
ARN: arn:aws:dynamodb:ap-southeast-2:[ACCOUNT NUMBER]:table/ASP.NET_SessionState

Part 2
Effect: Allow
AWS Service: Amazon SQS
Actions: deletemessage, deletemessagebatch, getqueueurl, receivemessage, sendmessage, sendmessagebatch
ARN: arn:aws:sqs:ap-southeast-2:[ACCOUNT NUMBER]:dinoorders

The policy is then named ‘DynamoSqsPolicy’.
001 DynamoSQS Policy Generator
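
Assembled by the Policy Generator, the resulting policy document looks roughly like this (the account number is left as a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:DeleteItem",
        "dynamodb:DescribeTable",
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:UpdateItem"
      ],
      "Resource": "arn:aws:dynamodb:ap-southeast-2:[ACCOUNT NUMBER]:table/ASP.NET_SessionState"
    },
    {
      "Effect": "Allow",
      "Action": [
        "sqs:DeleteMessage",
        "sqs:DeleteMessageBatch",
        "sqs:GetQueueUrl",
        "sqs:ReceiveMessage",
        "sqs:SendMessage",
        "sqs:SendMessageBatch"
      ],
      "Resource": "arn:aws:sqs:ap-southeast-2:[ACCOUNT NUMBER]:dinoorders"
    }
  ]
}
```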

Again in IAM, a new role needs to be created. The role is called ‘WebServerRole’, its AWS service role is ‘Amazon EC2’, and it contains the customer-managed policy ‘DynamoSqsPolicy’.
002 IAM WebServerRole

Then in the EC2 service, a new instance can be created with the following settings:
Instance: Free tier Microsoft Windows 2012 R2 Base,
Type: General Purpose t2.micro (free tier available)
IAM Role: WebServerRole
Name: Web Server DSL5 18-4
Security Group: Create new security group

Name: WebRDPGroup
Description: Web or RDP access – created for lab web server ec2 instance
Input Protocol:
RDP -Location IP
HTTP -All Sources

003 WebRDP Instance
To this instance, I attached an already created key pair.

Also in EC2, another instance needs to be created for the queuing server. Again, a free-tier t2.micro Windows Server 2012 R2 Base instance is launched.
IAM Role: WebServerRole
Name: Queue Server DSL5 18-4
Security Group: Create new security group

Name: RDPGroup
Description: RDP access – created for lab queue server ec2 instance
Input Protocol:
RDP -Location IP

004 Queue Server Instance
I also used a previously created key pair for this instance.

For the web server instance, the remote desktop file is downloaded and the password decrypted using the key pair. Once connected to the server, IIS (including ASP.NET 4.5 with developer files), HTTP connectors, and the Windows Authentication role service need to be installed.
005 Install IIS

In Visual Studio, the DinoStore needs to be published to the file system so that it can be copied onto the web server.
006 Publishing DinoStore Project

In the web server, the published DinoStore folder is copied into \inetpub\wwwroot. In IIS Manager, the DinoStore folder can be converted to an application by selecting the folder and choosing the ‘Convert to Application’ option.

007 Copying Files to wwwroot in RDP

Moving file into \wwwroot

008 Convert Dinostore File to Application

Converting file into an application


In order to allow instances in the RDP and WebRDP security groups to access the instances in the RDS security group, the security group created by RDS is selected, and in the Inbound tab two new rules need to be created. Both have Type: All traffic and Protocol: All, with the Source being their respective security group.
009 RDS Sec Group Access to RDPs
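
Expressed in CloudFormation-style JSON (a sketch for illustration, not taken from the console), the two rules amount to something like:

```json
"SecurityGroupIngress": [
  { "IpProtocol": "-1", "SourceSecurityGroupName": "WebRDPGroup" },
  { "IpProtocol": "-1", "SourceSecurityGroupName": "RDPGroup" }
]
```

An IpProtocol of -1 means all protocols and ports, and using a source security group rather than a CIDR means any instance belonging to that group is allowed in.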

Once again in the web server, the Web.config file is opened in Notepad for editing. The DynamoDBSessionStoreProvider keys should be deleted from between their quotation marks. The same needs to be done for the keys below, and then the file can be saved.

If Internet Explorer is opened in the web server, the instance metadata link http://169.254.169.254 shows the following information, which is a set of temporary credentials.
010 Temp Credentials from Role
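
The credentials document for the role (served from http://169.254.169.254/latest/meta-data/iam/security-credentials/WebServerRole) is JSON shaped roughly like the following; every value here is a placeholder:

```json
{
  "Code": "Success",
  "LastUpdated": "2017-05-08T02:00:00Z",
  "Type": "AWS-HMAC",
  "AccessKeyId": "ASIAEXAMPLEKEYID",
  "SecretAccessKey": "EXAMPLE-SECRET-KEY",
  "Token": "EXAMPLE-SESSION-TOKEN",
  "Expiration": "2017-05-08T08:00:00Z"
}
```

The AWS SDK on the instance fetches and refreshes these credentials automatically, which is why nothing needs to be baked into the Web.config.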

In IIS Manager on the web server, the website needs to be selected from the left panel of the window, and the centre pane changed to ‘Content View’. From there, ‘default.aspx’ can be right-clicked and the ‘Browse’ option chosen. This leads to the DinoStore home page, from which the various features such as login and buy can be used.
013 Dinostore Home on VM

The public DNS of the Web Server DSL5 needs to be tested over a public internet connection. This is done by copying the DNS into a new browser window on the desktop (rather than on the web server itself) and adding the website name to the end of the URL. In this scenario, both IP addresses, from the server and the browser, will be the same.
016 DinoStore Connection over Public IP

The next step is to set up the order-processing app on the queue server. Before the file can be published, it needs to be built in Release mode from the DinoStore solution. This is done by selecting ‘Net702.DinoStore.OrderProcessor’ in the Solution Explorer, then using the drop-down in the toolbar, directly below the Tools menu, to change the configuration from Debug to Release. Once the solution has been built, it needs to be published before being copied onto the queue server’s cloud desktop.
017 Configuration Manager in VS

The OrderProcessor application needs to run at the server’s startup. This is done by copying the ‘setup’ executable from the published output and pasting it into C:\ProgramData\Microsoft\Windows\Start Menu\Programs\StartUp. The application can then be run.
020 OP exe into Startup File

In the local server, the AWS DinoStore database needs to be opened in order to determine what orders are present in the order table.

Then, in the local browser, the cloud website can be opened in order to log in and purchase some dinosaurs through the checkout.

While the DinoStore is open, the queue server needs to be kept ready for quick access so that the OrderProcessor console can be watched. As the DinoStore purchase is made, a ‘Queue messages received: count is 1’ line shows up on the console, followed by a ‘Queue message(s) deleted’ line.
024 Polling Queue in QS VM after Order

Finally, the AWS DinoStore database is re-examined to check that the new order has been recorded in the order table.

Challenges
I faced a few challenges throughout the course of this lab as well.

My first challenge was easily solved, but it involved Internet Explorer in the web server. When Internet Explorer is first accessed in the remote desktop, it has high security settings that make it very hard to do anything in the browser. This problem was solved by checking online for how to reduce the browser’s security.

Another small problem was that I didn’t know where \inetpub\wwwroot was located. Due to my lack of familiarity with Windows Server 2012, I had trouble locating it on my own. I solved this by looking at a classmate’s blog for assistance; one of their pictures showed the file path for wwwroot, which enabled me to access it for myself as well.

Another error that caused some difficulty was attempting to run my converted folder without realizing that I needed to manually convert another portion of it. The folder that I copied from my local server into the web server contained the DinoStore information within another folder inside it. When I converted the main folder to an application, I was unaware that the conversion had not reached the inner folder that contained the DinoStore information. This resulted in the following error screen:
012 Parse Error

I managed to solve this while looking through the main folder in IIS Manager, checking whether there were any other ‘default.aspx’ or ‘web.config’ files that were perhaps being accessed instead of the ones that I had adjusted. In hindsight, my arrangement and organization of the DinoStore files and related folders were poor, which was the main factor in this error’s occurrence. Once I realized my mistake, I converted the ‘Net702.DinoStore’ folder within the ‘1-Net702.DinoStore’ folder, and this solved the configuration error.

Lab 2: Using RDS with ASP.NET Applications

Launching the MySQL RDS Instance and related MySQL connection
The first action required to use RDS with ASP.NET applications is to launch an RDS instance. In this project the specifications for the instance are as follows:

  • MySQL instance
  • Micro instance
  • Multi-AZ deployment (Not eligible on Free-Tier)
  • General purpose storage type
  • 5GB storage
  • DB Instance Identifier: DinoStoreInstance
  • Master username
  • Master password
  • Default VPC
  • Security group: Create new security group
  • Database name: dinostoredb
  • Disabled automatic back-ups
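
Although this instance was launched through the console, the same configuration expressed as a CloudFormation-style resource would look roughly like this (a sketch only; the db.t2.micro class and the credential placeholders are assumptions):

```json
"DinoStoreInstance": {
  "Type": "AWS::RDS::DBInstance",
  "Properties": {
    "Engine": "MySQL",
    "DBInstanceClass": "db.t2.micro",
    "MultiAZ": true,
    "StorageType": "gp2",
    "AllocatedStorage": "5",
    "DBName": "dinostoredb",
    "MasterUsername": "[MASTER USERNAME]",
    "MasterUserPassword": "[MASTER PASSWORD]",
    "BackupRetentionPeriod": "0"
  }
}
```

Setting BackupRetentionPeriod to 0 is what disables automated backups.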

In the RDS security group, I set the inbound rules to MySQL, with sources for both my home IP address and NMIT’s address. This allows me to access the database through MySQL from both places.
Inked008 Security Group Inbound Sources_LI

In MySQL Workbench, a new connection is created, this time with the RDS instance endpoint as the hostname. The connection name is AWS DinoStore, with the username and password being the ones created during the instance set-up.
009 MySQL Connection

In the connection, the create-tables script from lab one is used to create the tables, this time in the cloud. Doing this also creates another schema called dinostoremembershipdb. The products are then added to the product table by uploading the CSV file that contains the S3 bucket image references for the image field.
010 MySQL import db

Creating a read-replica RDS and its related MySQL connection
The next move is to create a read replica of the dinostoredb instance. This is done by selecting the instance and clicking ‘Create Read Replica’ in the ‘Action’ menu. The replica is identified as “dinostoreinstancereplica”, and keeps the same class but is set in a different availability zone.
014 RR Instance Specs
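
For reference, a read replica can be expressed in the same CloudFormation style by pointing SourceDBInstanceIdentifier at the master (again a sketch; the availability zone shown is only illustrative):

```json
"DinoStoreReplica": {
  "Type": "AWS::RDS::DBInstance",
  "Properties": {
    "SourceDBInstanceIdentifier": "dinostoreinstance",
    "DBInstanceIdentifier": "dinostoreinstancereplica",
    "DBInstanceClass": "db.t2.micro",
    "AvailabilityZone": "ap-southeast-2b"
  }
}
```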

In MySQL Workbench, another connection can be made, this time with the hostname pointing to the replica. The connection name is AWS DinoStore Replica, and it contains the same username and password as the AWS DinoStore connection.
016 MySQL Replica Test

Configuring Visual Studio
In Visual Studio, in the Web.config file, the code can be pointed at the cloud database rather than the local one. This is done by editing the connection strings: for both “DefaultConnection” and “StoreSqlDb”, the localhost IP is replaced with the main RDS instance’s endpoint. Another connection line is created by copying and pasting one of the lines above, renaming it “StoreSqlDbReplica”, and replacing the endpoint with the read replica instance’s endpoint.
018 VS Adding Replica to WebConfig

In the Default.aspx page, in ConfigurationManager.ConnectionStrings, “StoreSqlDb” should be changed to “StoreSqlDbReplica” so that the images are read from the replica rather than the main database. The lab text mentions that this reduces the load on the primary database and so leaves more cycles available for writes.

The Visual Studio adjustments can be tested by building the code and running it in a browser.
006 Updated Dinostore with png files attached -prep

Inputting data into membership database from browser
At this point, there are no tables in the cloud membership database in MySQL Workbench.

On the site being hosted in the browser, a new account and user can be created. This should create a table in the cloud membership database and also create one for the custom table.
020 Web Login Page

In MySQL Workbench, in the dinostoremembershipdb tables, there is now a my_aspnet_users table that contains basic information about the user.
039 MySQL my_aspnet_user

If the master RDS server is rebooted while the site is still running, a couple of things occur: the website can still be viewed, as all of that data is held in the replica; however, if I try to make a new account or log out and log in, the site crashes, as that information is still being sourced from the master RDS.

Challenges
I encountered a few challenges with this lab that stumped me for quite a while, most of which were only resolved by asking classmates for help.

The first challenge arose from my choice to follow the instructions exactly and create my RDS instance with Multi-AZ deployment. Multi-AZ deployment is not free-tier eligible, so instance costs began to accrue, and I, somewhat unfamiliar with how RDS instances are billed, left the instance running overnight. As I stated in this week’s budget blog, that decision cost me (in USD credits), and so I started using snapshots in order to reduce my expenses. This is where the second challenge arose.

The second challenge involved the restoration of my instances from their snapshots. While the restoration itself was successful, I was unable to connect to the restored instances (meanwhile, the Multi-AZ RDS was charging me again now that it had been restored). After a few hours of searching for the cause through Visual Studio, MySQL Workbench, the details of my RDS instances, and the AWS help guides, I finally found the source of my inability to connect: the restored RDS instances were automatically attached to the wrong security group. By modifying the instances to use the right security group, I was able to form the connections from the databases in MySQL Workbench to my RDS instances. This meant success once more, and a means to progress with the lab. By my understanding, I had resolved the cost issue of running a non-free-tier instance and learned how to properly reconnect to a restored instance.

Here’s where my third challenge arose. When I next went to work on my lab, I was again unable to connect to my databases. I checked that the security groups were correct, that the security group itself contained the polytech’s IP and my home IP, and that I had copied the endpoints correctly into the required fields in Visual Studio and MySQL Workbench. Having exhausted my understanding of what could be causing the connection problem, I tried creating new RDS instances and new MySQL connections to see whether the problem occurred during the creation of the MySQL connections. As I discovered, there was indeed a problem at the inception of the MySQL connection, but not for a reason I could understand.

It was at this point that I asked a classmate for help. He asked about the inbound rules for my security group. To my knowledge, I had used my home IP address, but I figured I might as well give it a try and reinstate it in one of the inbound rules. Unfortunately, challenge number four came along at this point.

Challenge number four was less of a challenge and more of an unfortunate situation. While I was seeking advice, the computer that holds my Visual Studio and MySQL programs developed a slight problem and I was unable to use it properly. Turning it off and on again resolved that, but MySQL Workbench had now cleared itself of all of my connections. This challenge was easily solved by creating new connections and ensuring that the tabular data was correctly stored and transferred between them. This resolution, however, could not happen without first resolving the third challenge.

In the DinoStore security group, I reinstated my IP as an allow rule for inbound traffic. As it turned out, my classmate’s line of thinking was correct: my IP address had changed. Once the security group was updated, I was able to connect to the RDS instances through MySQL again, and so progress with my lab.

In reviewing the challenges that I faced during this lab, I have found that, although time-consuming, they have built up my understanding of and confidence in working with RDS and AWS, and given me a better picture of the interconnections between AWS, Visual Studio, and MySQL in hosting a website. This troubleshooting experience should help me with further issues that I face during the course of this project.

Introduction to Elastic Load Balancing

Introduction and Aim
The purpose of this lab is to gain an understanding of the Amazon Elastic Load Balancer. QwikLabs describes the Amazon Elastic Load Balancer (ELB) as a ‘service that automatically distributes incoming application traffic across multiple EC2 instances.’ This can increase the fault tolerance of applications, as the ELB service responds to incoming traffic with the required load-balancing capacity. The ELB service can operate within a single availability zone or across many zones, and it can also be used in a VPC.


Goals

  • Logging into the Amazon Management Console
  • Creating an Elastic Load Balancer
  • Adding Instances to an Elastic Load Balancer


Logging into the Amazon Management console
When using AWS, I log into the console through my administrator account rather than my root account. This is a security measure, as my root account has access to the financial side of AWS. If I were using AWS for business or for sensitive information, I would have more users, each with access corresponding to the level of security required.
In order to reduce latency, my AWS account is set to the Sydney region. Although not every service is available in the Sydney region, I’m currently only working with the basics of what AWS can provide, so I haven’t yet come across any availability issues.


Creating an Elastic Load Balancer
ELBs are located within the EC2 service. For this lab, I chose a Classic Load Balancer, which I’ve called ‘free-lab-load-balancer’.
Classic LB
The security group assigned to the ELB is a new one called ELB-SG1. The lab script specifies a preset one, but as the script is only being used as a guideline, I needed to either use an existing group or make a new one.
InkedAssign SG (NEW) New SG_LI
The Type is an AWS preset configuration, so I’m keeping it as is.

The next step in the load balancer launch is ‘Configure Security Settings’, in which nothing is changed, so I move straight on to the ‘Configure Health Check’ screen. When I did this, a warning appeared:
Config Sec Settings Warning
This warning is something to be heeded for future professional use, but not for this lab.
The lab script asks for the following values:
Response Timeout: 2 seconds
Health Check Interval: 6 seconds
Unhealthy Threshold: 2
Healthy Threshold: 2
Config Health Check
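
In CloudFormation terms (shown only for reference, since the lab sets these values through the console; the ping target is an assumption here), the health check block for a classic load balancer would be:

```json
"HealthCheck": {
  "Target": "HTTP:80/",
  "Timeout": "2",
  "Interval": "6",
  "UnhealthyThreshold": "2",
  "HealthyThreshold": "2"
}
```

With these values, an instance is marked unhealthy after two consecutive failed checks and healthy again after two consecutive passes; note that the interval must be longer than the timeout.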

The next step is to add EC2 instances. I chose two arbitrary instances from my instance option list.
Adding EC2 Instances

As tags are not a part of this exercise, I move on to the final step of reviewing all the load balancer specifications.
ELB Review
After checking that everything was according to the script, the load balancer can be created.


Once the load balancer is created, I can click on the ‘Instances’ tab alongside the ‘Description’ tab near the bottom of the screen. The ELB displays alt-text over the ‘i’ icon next to each instance, reporting on the status of the instance in relation to the load balancer.
Instances Within the ELB
In the ‘Description’ tab, the DNS name field contains a hyperlink that, when copied into the browser window, directs to the load-balanced page. QwikLabs states that ‘While it all looks the same on the front end, as you refresh the page, on the back end your requests are being load balanced between your two running instances.’

The DNS link didn’t work for me, and instead just showed a blank screen. Upon further inspection with the Firefox developer tools, the network tab was reporting an Error 503, which is a back-end server problem (for an ELB, a 503 typically means there are no healthy registered instances available to serve the request).
Back End Server Unavailable

I considered that perhaps I had made a mistake during the load balancer launch process, so I created another load balancer, taking a look at a classmate’s blog for assistance and rigorously going over the lab script again.

The DNS link result this time was: Server not found. Using the developer tools, I was able to see that it wasn’t the same problem as with my previous load balancer, which implied that it was no longer a back-end server issue.
Network Display for LB 2
DNS Resolution
Unfortunately, I still didn’t know what the problem was, or why it was no longer a back-end issue.


Conclusion
This was an interesting lab in the application of a multi-instance service such as the Amazon Elastic Load Balancer. I would like to know why the DNS link failed, and I’m not confident that I could determine that on my own. Having a trained person explain the methods and reasoning behind the specification and launch of an ELB may be a beneficial way of helping me understand how to correctly implement the ELB service.