Budget Update: 30/04/2017

The billing report in AWS only displays expenditure for the current month. This month, the three main sources of expense have been the RDS, EC2, and KMS services.

More detailed billing logs for each service are shown below.


When input into my AWS spreadsheet, I get the following results:

SpreadSheet Budget

 Spreadsheet documenting AWS billing services

Spreadsheet Percentage growth budget

 Spreadsheet calculating percentage growth between each budget update

The percentage growth between this budget update and the last one has dropped, which reflects my reaching the end of the DinoStore project and, hence, using those services less frequently.
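The growth figure here is just the increase in the cumulative bill since the previous update, expressed as a percentage of the previous total. A minimal sketch of the calculation, using made-up dollar amounts rather than my actual billing figures:

```python
# Percentage growth of the cumulative AWS bill between two budget updates.
# The dollar amounts below are illustrative, not my actual billing figures.
def percentage_growth(previous_total, current_total):
    """Increase since the last update, as a percentage of the previous total."""
    increase = current_total - previous_total
    return round(increase / previous_total * 100, 2)

# A small increase on a larger base gives a lower growth figure,
# which is what happens as the project winds down.
print(percentage_growth(11.41, 13.50))
```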

RDS Budgeting

As the end of the month has drawn closer, I’ve been receiving emails from AWS regarding my RDS budget’s expiration.

AWS Expiration Message

When I designed the RDS budget, I designed it with the DinoStore project in mind. As the DinoStore project was due on the 29th of April, I set the budget to also finish at the end of the month.

So, with the RDS budget coming to an end, I decided that it would be a good time to check my monthly expenditure against the budgeted amount. My first look at the budget list showed the RDS forecast to be lower than I expected, given the RDS billing amount:

RDS All Regions

I considered that the underestimated RDS budget was likely due to an incorrect parameter that I had given it, so I adjusted it to focus on the regions ap-southeast-2 and us-east-1, which correspond to Sydney and North Virginia.
Edit DinoStore RDS Budget

Once these adjustments had been made, the RDS budget showed a current expenditure of $9.28 USD, which matches the billing amount.
RDS Budget Details

Although my adjustments were made at the end of the budget’s lifespan, realizing and applying this correction will enable me to create a more accurate budget plan for my next project.
From an accounting perspective, rather than a technical one, the RDS budget allowance was an accurate prediction of expenditure for the DinoStore project. This knowledge can be carried over into my next project to provide a reliable budget in relation to the project and its progress.

Lab 10: Configuring DNS with Route 53

I chose not to do this optional lab, as it requires registering a domain name, which is not covered by the free tier.

This DinoStore project has already cost me quite a bit (see latest budget report), so I’ve decided not to add any further expense for this project.

This project also took me longer than I anticipated, so I ran out of time to complete this one.

Lab 9: Enabling Auto Scale to Handle Spikes and Troughs

The lab script contains an excerpt from the AWS documentation regarding auto scaling: “Auto Scaling is a web service designed to launch or terminate Amazon EC2 instances automatically based on user-defined policies, schedules, and health checks.”

Auto Scaling is found under the EC2 service.
001 Launch Configurations

Select the ‘Launch Configurations’ option, then click ‘Create Auto Scaling Group’ followed by ‘Create Launch Configuration’.
The configuration needs to have the following specifications:

002 Choosing Web Server from my AMIs

My AMIs: DinoStoreWebServer
Instance: t2.micro
Name: scale-web
IAM Role:  WebServerRole
Security Group: WebRDPGroup
Key Pair: (I’m using an existing key pair)

003 Review Launch 1 of 2
004 Review Launch 2 of 2
In ‘Create Auto Scaling Group’ a group needs to be created with the following specifications:

Group Name: scale-web-asg
Group Size: starting with two instances
Network: Default VPC
Subnet: Default subnets for each Availability Zone

005 Create Auto-Scaling Group

Advanced Details:
Receive traffic from ELB: DinoStoreLB
Health Check Type: ELB
006 Advanced details 9_1_c_v
Configure Scaling Properties:
Group remains at initial size
007 Auto-Scaling Policies
Tags-Name:ASG-WebServer, Tag instances


In the EC2 dashboard, the auto scaling information shows the instances launch, as does the instance screen.
The lab script mentions that the manually created servers would usually be deleted before or after the auto scaling group is set up, because of their inability to scale.
The load balancer screen also shows the status of the instances in service.
012 AS w Two Instances

Using the load balancer URL, the DinoStore web site can be displayed. Upon refreshing the site, the internal IPs change.


For this, all of my instances attached to the load balancer were utilized.

Then in the EC2 instance window, the web server and its image (the non auto-scaling instances) need to be terminated, before testing the DinoStore site refresh to check the number of IPs available.


The number of different IPs available dropped down to match the number of auto scale IP instances within the load balancer.

In order to test the auto scaling, one of the autoscale instances needs to be terminated, and then the EC2 instance window and the auto scaling groups window need to be checked for a new instance being automatically created.
020 AS Instances Starting with Available Space
After the new instance is in service, the DinoStore page again needs refreshing to check the number of IP addresses. (This has more than one new instance in progress as I was playing around with the settings and set the desired number of instances to five.)
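The replacement behaviour being tested here can be modelled very simply: the group compares its instance count against the desired capacity and launches until they match. A toy Python sketch (the real ASG launches EC2 instances from the launch configuration; the instance IDs below are made up):

```python
# Toy model of an Auto Scaling group maintaining its desired capacity.
# Instance IDs are made up; a real ASG launches EC2 instances instead.
import itertools

class AutoScalingGroup:
    def __init__(self, desired_capacity):
        self.desired = desired_capacity
        self._ids = itertools.count(1)
        self.instances = set()
        self.reconcile()

    def reconcile(self):
        """Launch replacements until the group is back at desired capacity."""
        while len(self.instances) < self.desired:
            self.instances.add(f"i-{next(self._ids):04d}")

    def terminate(self, instance_id):
        self.instances.discard(instance_id)
        self.reconcile()  # a new instance is started automatically

asg = AutoScalingGroup(desired_capacity=2)
victim = next(iter(asg.instances))
asg.terminate(victim)
assert len(asg.instances) == 2 and victim not in asg.instances
```

Setting the desired number of instances to five, as I did while playing with the settings, is just a larger value of `desired_capacity` in this model.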


Challenges
The challenge I faced with this lab was due to my lack of knowledge that I can only have ten EC2 instances available at any one time. The auto scaling group ended up failing because of this limitation, which I had reached with the rest of my instances.
010 Auto-Scaling -Activity History w One Instance
In order to resolve this, I terminated any unnecessary instances.

Lab 8: Using ELB to Scale Applications.

The AWS documentation on Elastic Load Balancing states that “Elastic Load Balancing [is a scaled service that] automatically distributes your incoming application traffic across multiple EC2 instances. It detects unhealthy instances and reroutes traffic to healthy instances until the unhealthy instances have been restored. Elastic Load Balancing automatically scales its request handling capacity in response to incoming traffic.” This excerpt is quoted at the top of the lab script to give a better understanding of what Elastic Load Balancing does.

Elastic load balancing is found in the EC2 service, under ‘Load Balancers’. The load balancer created has the following specifications:

Load Balancer name: DinoStoreLB
Load Balancer Configuration: Kept as default
Security Group: WebRDPGroup
Ping Protocol: TCP (This influences the Load balancer choice at the start of the creation.)
Ping Port: 80
-Add the two web servers created in prior labs

002 Review ELB
003 Review ELB Part 2

The instances are defined as being ‘Out of Service’ until the registration process finishes. Once registered, they are defined as being ‘In Service’.
005 New ELB Instances description

The load balancer’s public DNS is then copied and pasted into a new tab with the website name as its suffix. This serves the DinoStore site but cycles through the two different IPs belonging to the web server instances.
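The changing internal IP on refresh can be pictured as round-robin rotation over the in-service instances. A sketch using the two internal IPs mentioned later in this post; this is an idealized model, and the actual ELB routing algorithm may differ:

```python
# The load balancer alternates requests between registered, in-service
# instances, which is why the internal IP changes on each refresh.
import itertools

in_service = ["172.31.21.72", "172.31.16.31"]
balancer = itertools.cycle(in_service)

# Over several refreshes, both instance IPs are seen.
assert {next(balancer) for _ in range(10)} == set(in_service)

# With one instance deregistered, only a single IP remains in rotation.
balancer = itertools.cycle(["172.31.21.72"])
assert {next(balancer) for _ in range(10)} == {"172.31.21.72"}
```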


Removing one of the instances leaves the load balancer with only a single IP address to use, so there is no change to the IP address of the DinoStore site on refresh.


Once the removed instance is re-instated, the load balancer can now choose between using either IP again.



Challenges
I didn’t face any challenges or difficulties in this lab. I did, however, have to backtrack on my initial creation of the elastic IP, as I had chosen the wrong type.

Not so much a challenge, but more of an interesting aside: I noticed that changes installed into the web server’s (IP 172.31.21.72) DinoStore application weren’t immediately reflected on my LabSix-WS instance (IP 172.31.16.31). I concluded from this that an image is only as current as the time when it was taken. To resolve the discrepancies between my server and its image, I created a new image (IP 172.31.17.235), which was re-instated in place of the removed LabSix-WS image.
The lack of interchange of updated information between an instance and its image does make me wonder how useful images are for servers that are frequently updated, as a new image then also needs to be made in response to each update.

Lab 7: Using Elastic IPs

The Elastic IP option is found within the EC2 service.

In the Elastic IP window, ‘Allocate New Address’ is clicked, which supplies a new elastic IP.
001 Allocate Address

The new IP can then be selected, and a right-click on it reveals the option to ‘Associate Address’. The instance associated with this address is the Web Server instance.
002 Associate Address

The new IP address can then be copied and pasted into a new browser tab, with /1-Net702.DinoStore/Net702.DinoStore/ added as a suffix. The internal IP for this DinoStore site is noted.
003 Internal IP

The IP address is then re-associated with the web server image. The IP re-association is allowed by clicking the ‘Reassociation’ option box in the IP Association popup window.

This IP is also copied and pasted into a browser with the website suffix, and the internal IP noted.
004 Internal IP of AMI

Challenges
I did not have any challenges or troubles with this lab.

Lab 6: Creating and using AMIs

The AMIs used in this lab are created from pre-existing instances.

In EC2, the web server instance is made into an image by right-clicking on the instance and choosing ‘Image->Create Image’.
001 Create image in Web Server Instance
For this image, the name is: DinoStoreWebServer, and the image description is: ‘Image of DinoStore website vm.’
002 Image Format

The queue server is also made into an image, with its name being: DinoQueueServer, and its image description: ‘Image of DinoStore queue server vm.’

003 AMI Interface

The AMIs are contained within the EC2 service.

Once created, the web server image is launched with the following specifications:

Type= t2.micro
(The subnet could be in a different availability zone to spread instances around the region. This is potentially good practice if the finances are available for it; however, in my case there is no need to change it.)
IAM Role: WebServerRole
Tag (Name): LabSix-WS
Security Group: WebRDPGroup
Key Pair: Existing key pair

004 WS-AMI instance Review 1_2
005 WS-AMI Instance Review 2_2

While waiting for the image to initialize, the original web server is opened in the local browser, taking note of the IP address.
WebRDP DNS in Browser

Once ready, the public DNS of the image is copied into a new tab in the browser, with the website name attached to the end of the URL. The IP address of this is also noted.
LabSix WS DNS in Browser

They have different IP addresses.

Challenges
My only challenge with this lab was that I didn’t know my website’s full path. This meant that I was putting in /Net702.DinoStore/ and receiving this error:
007 Server Error On Local Browser

Or trying /1-Net702.DinoStore/ at the end of the DNS and receiving this message in my browser window:
008 WebRDP Server w LabSix DNS in Browser

Eventually I realized that, as I was only using an image, I would be able to locate my website details from the original web server instance. After opening the RDP session and connecting to DinoStore through IIS, I was able to determine that my website path was /1-Net702.DinoStore/Net702.DinoStore/, due to the folder-within-a-folder created when I copied my DinoStore folder into the wwwroot directory.

Lab 5: Adding EC2 Virtual Machines and Deploying the Web App

The lab script explains the initial step in this lab is “to create roles that access other amazon Services so that applications running on EC2 instances don’t have to have credentials baked into the code.”

In the IAM service, a policy can be created using the Policy Generator. This policy has the following settings:

Part 1
Effect: Allow
AWS Service: Amazon DynamoDB
Actions: deleteitem, describetable, getitem, putitem, updateitem
ARN: arn:aws:dynamodb:ap-southeast-2:[ACCOUNT NUMBER]:table/ASP.NET_SessionState

Part 2
Effect: Allow
AWS Service: Amazon SQS
Actions: deletemessage, deletemessagebatch, getqueueurl, receivemessage, sendmessage, sendmessagebatch
ARN: arn:aws:sqs:ap-southeast-2:[ACCOUNT NUMBER]:dinoorders

The policy is then named ‘DynamoSqsPolicy’
001 DynamoSQS Policy Generator
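For reference, the Policy Generator settings above correspond to a JSON policy document along these lines. This is a sketch: the account-number placeholder is kept as in the lab, and the action names are shown in their usual CamelCase form rather than the lowercase labels the generator displays.

```python
import json

# Approximate JSON produced by the Policy Generator for the settings above.
# [ACCOUNT NUMBER] is left as a placeholder, as in the lab script.
dynamo_sqs_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "dynamodb:DeleteItem", "dynamodb:DescribeTable",
                "dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:UpdateItem",
            ],
            "Resource": "arn:aws:dynamodb:ap-southeast-2:[ACCOUNT NUMBER]:table/ASP.NET_SessionState",
        },
        {
            "Effect": "Allow",
            "Action": [
                "sqs:DeleteMessage", "sqs:DeleteMessageBatch", "sqs:GetQueueUrl",
                "sqs:ReceiveMessage", "sqs:SendMessage", "sqs:SendMessageBatch",
            ],
            "Resource": "arn:aws:sqs:ap-southeast-2:[ACCOUNT NUMBER]:dinoorders",
        },
    ],
}
print(json.dumps(dynamo_sqs_policy, indent=2))
```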

Again in IAM, a new role needs to be created. The role is called ‘WebServerRole’, its AWS service role is ‘Amazon EC2’, and it contains the customer managed policy ‘DynamoSqsPolicy’.
002 IAM WebServerRole

Then in the EC2 service, a new instance can be created with the following settings:
Instance: Free tier Microsoft Windows 2012 R2 Base,
Type: General Purpose t2.micro (free tier available)
IAM Role: WebServerRole
Name: Web Server DSL5 18-4
Security Group: Create new security group

Name: WebRDPGroup
Description: Web or RDP access – created for lab web server ec2 instance
Input Protocol:
RDP -Location IP
HTTP -All Sources

003 WebRDP Instance
With this security group, I attached an already created key pair.

Also in the EC2, another instance needs to be created for the queuing server. Again, a free tier t2.micro Windows Server 2012 R2 Base instance is launched.
IAM Role: WebServerRole
Name: Queue Server DSL5 18-4
Security Group: Create new security group

Name: RDPGroup
Description: RDP access – created for lab queue server ec2 instance
Input Protocol:
RDP -Location IP

004 Queue Server Instance
I also used a previously created key pair for this security group.

For the web server instance, the remote desktop file is downloaded and the password decrypted using the key pair. Once connected to the server, IIS (including ASP.NET 4.5 with developer files), HTTP connectors, and the Windows authentication role services need to be installed.
005 Install IIS

In Visual Studio, the DinoStore project needs to be published as a file system, which can then be copied onto the web server.
006 Publishing DinoStore Project

In the web server, the published dinostore folder is copied into \inetpub\wwwroot. In the IIS manager, the dinostore folder can be converted to an application by selecting the folder and clicking the ‘convert to application’ option.

007 Copying Files to wwwroot in RDP

Moving file into \wwwroot

008 Convert Dinostore File to Application

Converting file into an application


In order to allow instances in the RDP and WebRDP security groups to access the instances in the RDS security group, the security group created by the RDS is selected, and in the inbound tab, two new rules need to be created. Both have Type: All traffic, Protocol: All, and Source: their respective security group.
009 RDS Sec Group Access to RDPs

Once again in the web server, the Web.config file is opened in Notepad for editing. The DynamoDBSessionStoreProvider keys should be deleted from between their quotation marks. This also needs to occur for the keys below, and then the file can be saved.

If Internet Explorer is opened in the web server, the link http://169.254.169.254 shows the following information, which is a set of temporary credentials.
010 Temp Credentials from Role

In IIS Manager in the web server, the website needs to be selected from the left panel of the window, and the centre pane changed to ‘Content View’. From there, the ‘default.aspx’ can be right-clicked, and the option to ‘browse’ can be chosen. This leads to the DinoStore home page, of which, the various aspects such as login and buy can be used.
013 Dinostore Home on VM

The public DNS of the Web Server DSL5 needs to be tested over a public internet connection. This is done by copying the DNS into a new browser window on the desktop (rather than the web server itself), and adding the website name to the end of the URL. In this scenario, both IP addresses, from the server and the browser, will be the same.
016 DinoStore Connection over Public IP

The next step is to set up the order processing app in the queue server. Before the file can be published, it needs to be built as a ‘release’ from the DinoStore solution. This is done by selecting ‘Net702.DinoStore.OrderProcessor’ from the Solution Explorer, then, in the icon bar directly below the Tools tab, changing the configuration option from Debug to Release. Once the solution has been released, it needs to be published before being copied onto the queue server’s cloud desktop.
017 Configuration Manager in VS

The OrderProcessor application needs to run at the server’s startup. This is done by copying the ‘setup’ executable found in the publication and pasting it within C:\ProgramData\Microsoft\Windows\Start Menu\Programs\StartUp. The application can then be run.
020 OP exe into Startup File

In the local server, the AWS DinoStore database needs to be opened in order to determine what orders are present in the order table.

Then in local browser, the cloud website can be opened for the purpose of logging in and purchasing some dinosaurs through the checkout.

While the DinoStore is open, the queue server needs to be within quick reach so that the OrderProcessor console can be watched. As the DinoStore purchase is made, a ‘Queue messages received: count is 1’ line shows up on the console, followed by a ‘Queue message(s) deleted’ line.
024 Polling Queue in QS VM after Order

Finally, the AWS DinoStore database is re-examined to check the new order that has been recorded in the order table.

Challenges
I faced a few challenges throughout the course of this lab as well.

My first challenge was easily enough solved, but it involved Internet Explorer in the web server. When Internet Explorer is first accessed in the remote desktop, it has high security settings that make it very hard to do anything in the browser. This problem was solved by checking online for how to reduce the browser’s security.

Another small problem that I had was that I didn’t know where \inetpub\wwwroot was located. Due to my lack of familiarity with Windows Server 2012, I had trouble locating it on my own. I solved this by looking at a classmate’s blog for assistance. One of their pictures showed the file path for wwwroot, which enabled me to access it for myself as well.

Another error that I faced, which caused some difficulties, was attempting to run my converted file without realizing that I needed to manually convert another portion of it. The folder that I copied from my local server into the web server contained the DinoStore information within another folder inside it. When I converted my main folder to an application, I was unaware that the conversion had not reached the folder that contained the DinoStore information. This resulted in the following error screen:
012 Parse Error

I managed to solve this while looking through the main folder in the IIS manager, attempting to check whether there were any other ‘default.aspx’ or ‘web.config’ files that were perhaps being accessed instead of the ones that I had adjusted. From a technical perspective, my arrangement and organization of the DinoStore and DinoStore-related files was poor, which could be considered the main factor in this error’s occurrence. Once I realized my mistake, I converted the ‘Net702.DinoStore’ folder within the ‘1-Net702.DinoStore’ folder, and this solved the configuration error.

Lab 4: Configuring the system to use Simple Queue Service

In the SQS service, a new queue can be created. The queue is named ‘dinoorders’.
001 Create SQS

After sending a message through ‘Queue Actions’, the queue screen can be refreshed to check whether messages are available.
002 Send Message

The message can be viewed by right-clicking on the queue and selecting the view/delete message in the pop-up tab.

The queue can then start polling for messages. Once polling has finished, the message can be deleted and polling stopped.
003 Polling Messages


In Visual Studio, in the Net702.DinoStore Checkout.aspx.cs, adjustments to the code need to be made:
Firstly, the lines of code from line 66 to 126 (inclusive) need to be deleted. The lab script explains that this “code inserts order information directly into the database, and hence doesn’t scale well.”
Secondly, the deleted code needs to be replaced with the following code:
000a Code excerpt 1
000b Code excerpt 2
The lab script states that “this code inserts the order information into the queue so the system can handle bursts.”
005 Update VS Checkout_aspx_cs
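The lab’s replacement code is C#; as a rough stand-in for what it does, here is the same idea sketched in Python: serialize the shopping cart to JSON and enqueue it, rather than writing the order straight to the database. The cart contents and the queue call are illustrative, not from the lab.

```python
import json

# What the replacement checkout code does, sketched in Python: the cart is
# serialized to JSON and pushed onto the queue instead of being written
# straight to the database. The cart below is made up for illustration.
cart = {"user": "jess", "items": [{"dino": "Raptor", "qty": 2, "price": 9.99}]}

message_body = json.dumps(cart)   # the Json.NET SerializeObject step
# In the lab, the body is then sent to the 'dinoorders' SQS queue; with
# boto3 this would be sqs.send_message(QueueUrl=..., MessageBody=message_body).

restored = json.loads(message_body)  # the order processor's deserialize step
assert restored == cart
```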

To ensure that the solution handles the aspx.cs file code properly, the following ‘using’ statements are added to the current set:

using Amazon.SQS;
using Amazon.SQS.Model;
using Newtonsoft.Json;

Still in Visual Studio, under Tools->NuGet Package Manager->Package Manager Console, the Json.NET package needs to be installed. As stated in the lab script, this package “allows the shopping cart to be serialized as a JSON object and added to the queue.”
008 Json_Net package

In the Visual Studio Net702.DinoStore web.config file, AWS account credentials are added below the line of code: ‘ValidationSettings:UnobtrusiveValidationMode’

<add key="AWSRegion" value="Your Region Here"/>
<add key="AWSAccessKey" value="Your Access Key Here"/>
<add key="AWSSecretKey" value="Your Secret Key Here"/>

The updated system can be checked by building and browsing the project and creating an order. The Amazon queue service should have received a message in response to the order creation.


The next step is to add an application that is able to pull orders from the queue. This application is a Net702.DinoStore.OrderProcessor.zip file that needs to be added to the Visual Studio DinoStore project. The lab script describes the order processor application as “console project code [that] polls the message queue for up to ten messages at once, and de-serializes the JSON object back into a shopping cart objects, deletes the processed messages, and adds the orders to the MySQL cloud database.”
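The polling loop the script describes can be sketched with an in-memory list standing in for the SQS queue. The real code is C# and calls the SQS receive/delete APIs; this sketch only shows the control flow:

```python
import json

# In-memory stand-in for the order processor's loop: receive up to ten
# messages, de-serialize each shopping cart, record the order, delete the
# message. A plain list stands in for the SQS queue here.
queue = [json.dumps({"order": n}) for n in range(13)]  # 13 pending messages
orders = []

while queue:
    batch, queue = queue[:10], queue[10:]   # SQS returns at most 10 per receive
    for body in batch:
        orders.append(json.loads(body))     # deserialize, then store the order
        # the real app then deletes the processed message from the queue

assert len(orders) == 13 and not queue
```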

In my case I added the OrderProcessor file to the Visual Studio code, by right clicking on Solution ‘Net702.DinoStore’ under the Solution Explorer window, and clicking Add->Existing Project, then browsing for the OrderProcessor Visual C# project file.

Within the OrderProcessor project in Visual Studio, in the Program.cs file, the SQS URL is added in two places: in place of request.QueueURL, and batchRequest.QueueURL.
000d VS Request
000c VS DeleteRequest

In the App.config file, the code is adjusted by adding the AWSRegion and access keys at the add key point, and the StoreSqlDB server connection string is changed to the dinostoreinstance URL with the username and password added to the line.
Inked023 Key Snap

Once the code is adjusted, the project is ‘Set as Startup Project’ by right-clicking on ‘Net702.DinoStore.OrderProcessor’ under Solution Explorer. The project is then right-clicked again to access its Properties. In the Properties tab, under Signing, the ‘Sign the ClickOnce manifests’ option needs to be un-ticked.
013 Unticking the ClickOnce manifests

Before running the OrderProcessor application, the MySQL AWS DinoStore order table will have nothing in it.
014 Dinostore Orders Table Empty

By running the application, the order message is pulled from the queue, added to the database, and then deleted. The app doesn’t recognize whether there are available messages or not, so it runs until the program is exited.
019 OP Pop up for Polling

Now in the MySQL AWS DinoStore order table, the order message data should be displayed.
021 MySQL w Order Table Full


Challenges
The first challenge that I faced was adding the OrderProcessor file to the Visual Studio solution, as I am not familiar with Visual Studio. My first attempts were to use the Project tab and ‘add existing item’. This wasn’t successful, as the item imported would only be part of the file, and I needed all of it to be contained in the solution. My working method came from realizing that the Solution Explorer appeared to contain multiple projects under the same solution. I determined that I needed to get my OrderProcessor project into this DinoStore solution, so I right-clicked on ‘Solution ‘Net702 DinoStore’ (2 projects)’ and tried the option ‘add->existing project’, which was successful.

The second challenge that I encountered was running the OrderProcessor application. The build would continually fail with various code-based error reports that I attempted to fix.
016 2 Errors and 1 Warning in VS
Through attempting to fix the code errors, I was able to reduce the error count from 19 errors to 2. However, these two remaining errors involved a variable called ‘ReceiveMessageResult’ that Visual Studio could not resolve. I attempted to solve it by trying the variable ‘ReceiveMessageResponse’, but that did not work either.
By right-clicking on the variable and clicking on to ‘Go To Definition’ option, I was able to see its inception, and the requirements needed to cause it to run.
017 ReceiveMessageResponse Definition
ReceiveMessageResponse is an AWS-based type, but it couldn’t be resolved in the Program.cs file where the other variable was being used.

I was aware that this particular lab had caused some troubles for my classmates, so I messaged them to ask for assistance on how they had managed to solve this problem. Their response surprised me as it had not been in my considerations; the way to solve this issue was not a matter of correcting a mistaken variable, but on having the correct packages for the OrderProcessor project. Where I had been thinking that every problem was purely an issue with the code, I had forgotten to consider that the imported project wouldn’t automatically contain the correct NuGet packages.
022 Packages_Config for Challenges Section
By downloading and installing the correct versions and types of NuGet packages for the OrderProcessor project, the ReceiveMessageResponse variable was able to be recognized, and the build to succeed.

Budget Update: 24/04/2017

My expectation for this week’s AWS bill was that it would be high, because I have been using services such as RDS and EC2 for the DinoStore project.

When I looked online tonight, my bills page showed this number:
Billing Amount
This is an increase of $7.24 since I last checked my budget.

I did receive a message from one of my budgets earlier in the week, on the 19th of April. I decided that, as I was still in the process of completing labs 4, 5, and 6 of the DinoStore project, I would write up the blog once I had completed the labs. My intent in this was to view the overall expenditure for these RDS-based labs.

As the previous budget was now set too low, I changed the budget amount from $1 USD (credit) to $5 USD (credit). Because I only adjusted the budget rather than creating a new one, all of the previously logged expenses remain. In the updated budget this put the forecast percentage to 55%. This required me to create alarms for forecasts above 55%.
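The forecast-percentage arithmetic behind that alarm is simple: the forecast expressed as a share of the budgeted amount. A sketch (the $5 budget is from this post; the forecast dollar figures are made up for illustration):

```python
# Forecast expenditure as a percentage of the budgeted amount, used to
# decide whether the 55% alarm fires. The $5.00 budget is from this post;
# the forecast dollar figures are made up for illustration.
ALARM_THRESHOLD = 55.0  # percent

def forecast_percentage(forecast, budget):
    return round(forecast / budget * 100, 2)

budget = 5.00
assert forecast_percentage(2.75, budget) == ALARM_THRESHOLD  # exactly at 55%
assert forecast_percentage(3.10, budget) > ALARM_THRESHOLD   # alarm fires
```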

EC2 Budget updated to 5-dollar

Percentage forecast alarm creation for budget

In order to clarify where the $11.41 of credits have been billed, I looked through the full billing information of each AWS service.

Below are billing snapshots of the three services that caused expense over the last week:


I compiled the billing information into my spreadsheet so that I could compare the cost amount for each service.
AWS Spreadsheet
As expected, most of my costs can be seen to come from the RDS service. What I did not initially expect was that most of the cost would come from the ‘$0.017 per RDS db.t2.micro instance hour running MySQL’ line item, which corresponds to the MySQL CE RDS in the North Virginia region. However, this starts to make sense when I consider that although my Sydney-based RDS server is free, my backup in the North Virginia region is not. Hence, whenever I’ve had my instances running this past week, the North Virginia instance has accrued charges for every hour or partial hour of use, which has caused the high expense from that region.
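To sanity-check a line item like this, the per-hour rate can be multiplied by the billed hours, remembering that partial hours are rounded up. A quick sketch; the hour counts are illustrative, and the real figure depends on how long the North Virginia instance was actually running:

```python
import math

# '$0.017 per RDS db.t2.micro instance hour running MySQL', charged per
# hour or partial hour. The hour counts below are illustrative.
RATE_PER_HOUR = 0.017

def rds_charge(hours_running):
    return round(RATE_PER_HOUR * math.ceil(hours_running), 2)

assert rds_charge(100) == 1.70               # full hours
assert rds_charge(100.2) == rds_charge(101)  # partial hours round up
```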

I created another spreadsheet, this time looking at the percentage increase in cost per week (relative to the week prior). Extrapolating this information from my billing spreadsheet provides me with the same data, but from a different perspective, enabling me to further understand the financial investment required for each service and its charged attributes.
AWS Percentage Spreadsheet


Considering this from a business financial perspective, the question arises: is having the North Virginia backup worth its expense?
During the restoration of my RDS instance snapshots, I ensure that they are not multi-AZ deployment enabled, as this would cause a charge from the RDS Service Storage attribute.
In terms of financial capability: in my scenario, where I have $100 USD of free credit and this project is not long term, the North Virginia expense is not too problematic, even though it could be considered needless expenditure. Considering expense value leads me to another question: is this North Virginia backup a setting that can be changed within my RDS instances, or is it locked into the instance properties and unable to be adjusted?

It appears that both questions rest on a fundamentally wrong assumption. When I created the RDS instance quite a few weeks ago, I created a read-replica of one of my instances and placed it in a different region, North Virginia to be precise. My understanding was flawed, in that I thought that my instances were constantly being backed up into the North Virginia instance whenever I restored them from their snapshots. This understanding seemed to be backed up by the results that I found in my last budget report. Tonight, after being unable to find anything relating to North Virginia in my two instances, I changed the region on my AWS site and looked at my RDS service for the North Virginia region. What I found was a running instance.
NV RDS instance

So, in answer to my first question: this North Virginia read-replica appears to be a needless expense, and a pricey lesson in keeping track of everything in AWS, especially things organized into different regions.

As far as whether storing replicas or other services within different regions is worthwhile; that depends entirely on the business goals, financial priorities, and cloud server influence for each individual business.