Migrating Systems to Amazon Cloud: Tips, Tricks and Techniques
1. Background
For the last several months, I have been involved in designing, planning, and setting up a few large cloud-based infrastructures for transaction processing and real-time content delivery. All these setups had quite complex software architectures, involving several dozen interconnected server instances and numerous commercial and private AMIs in each case. Now that the projects are nearly complete and the migrations are behind us, I can summarize some practical lessons learned for others to benefit from.
AWS is the most popular cloud infrastructure. It takes just a few minutes to sign up for a new account and start using it. However, it is not cheap compared to dedicated servers, and it is definitely much more complex to manage and administer than a bunch of dedicated servers. There is a lot to learn if this is your first time working with a large cloud setup.
Why, then, is AWS popular with developers, businesses, and application builders? The main reason is that it provides a single-point solution for everything from object storage (S3), elastically provisioned servers (EC2), databases as a service (RDS), payment processing (DevPay), virtualized networking (VPC and AWS Direct Connect), and a content delivery network (CloudFront), to monitoring (CloudWatch), queueing (SQS), and a whole lot more.
The second reason I recommend AWS is that its documentation is actually quite good. So, if you are lost, you can usually figure out a way to do things.
In this post, I’ll be going over some useful tips, tricks, and general advice for getting started with Amazon Web Services (AWS).
2. First Things First: Understanding AWS Billing and Managing Your Cost
Before you even begin to think about migrating your current data center, understand the cost implications. All I can say is: be ready for real surprises.
If you have dedicated servers and pay a monthly fee per server, you know what your cost will be at the end of the month. AWS pricing is like eating in a restaurant where you are told the general range of prices, but you are billed for every activity inside the restaurant: the minutes you spend there, the time it takes to cook the dishes you order, the number of times you call the waiter, and so on. I hope you get the point.
AWS billing is invoiced at the end of the month, and AWS services are generally provided on a "per use" basis. For example, EC2 servers are quoted in $/hour. If you spin up a server for 10 hours and then turn it off, you'll only be billed for those 10 hours.
Unfortunately, AWS does not provide a way to cap your monthly expenses. If you accidentally start up too many server instances and forget to turn them off, you could get a big shock at the end of the month. If you are a small business on a shoestring budget and your IT person accidentally runs up a big bill, you will have a real problem on your hands.
Similarly, AWS charges you for the total outbound bandwidth used. If you have a spike in traffic to a site hosted on AWS, excessive usage of S3, or a denial-of-service attack, you could end up with a sizable bill as well. AWS doesn't care why you had a spike in traffic.
I have seen a company that used to pay about $500 a month for software on dedicated servers end up with a bill of several thousand dollars, because someone provisioned the wrong instances just to test something and left them running.
AWS does allow you to set up billing alerts. Amazon CloudWatch lets you use your projected monthly bill as a metric for alarms, and you can have a notification sent to you when it exceeds a preset dollar amount.
For your own protection, do set spending alerts. It is better to catch a runaway server in time than to deal with the shock of a humongous bill on your credit card.
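As a minimal sketch with the AWS CLI (the alarm name, threshold, and SNS topic ARN below are placeholders you would replace with your own), a billing alarm on the EstimatedCharges metric looks roughly like this:

% aws cloudwatch put-metric-alarm \
    --alarm-name monthly-bill-over-1000 \
    --namespace AWS/Billing --metric-name EstimatedCharges \
    --dimensions Name=Currency,Value=USD \
    --statistic Maximum --period 21600 --evaluation-periods 1 \
    --threshold 1000 --comparison-operator GreaterThanThreshold \
    --alarm-actions arn:aws:sns:us-east-1:123456789012:billing-alerts

Note that billing metrics must first be enabled in your account's billing preferences, and they are published only in the us-east-1 region.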
3. AWS Security and Managing Private Server Access Keys
I have written about cloud security before, and security in cloud installations is a serious matter. It is not something to leave to novices, or you will repent later when all your data is stolen by hackers.
Use Multi-factor Authentication
If you are a large organization with many people, the first and most important step is to protect your AWS user accounts with multi-factor authentication (MFA).
AWS supports virtual MFA clients on Android, iPhone, and BlackBerry, or you can buy a hardware token device.
AWS also supports hardware token devices from Gemalto: one-time password (OTP) tokens that provide a simple solution for secure remote access with strong authentication. Two options are available, a key fob device at $12.99 and a display card at $19.99, both designed exclusively for use with Amazon Web Services. Both will last about three years.
I personally like the virtual device option and would recommend that.
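If you prefer to script the setup of virtual devices, the IAM CLI can create and bind one. This is a rough sketch (the user name, device name, and account ID are placeholders, flag spellings differ slightly between CLI versions, and the two codes are consecutive values from your authenticator app):

% aws iam create-virtual-mfa-device --virtual-mfa-device-name alice-mfa \
    --outfile alice-qr.png --bootstrap-method QRCodePNG
% aws iam enable-mfa-device --user-name alice \
    --serial-number arn:aws:iam::123456789012:mfa/alice-mfa \
    --authentication-code1 123456 --authentication-code2 654321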
Managing Private Server Keys
By default, Linux instances in EC2 use SSH key files for authentication instead of SSH usernames and passwords. Using key files reduces the chance of somebody guessing a password to gain access to the instance.
If you have a large number of instances, creating a separate SSH access key for each instance is out of the question; the management of those keys would be a nightmare.
I would suggest creating one key per project or application group. For instance, have a key for media servers, another key for database servers, and another key for production software. Plan some way of managing these keys based on your access needs.
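As an illustrative sketch with the AWS CLI (the key name is a placeholder for your own group name), you can create a named key pair per group and save the private key locally, since this is the only time AWS hands it to you:

% aws ec2 create-key-pair --key-name media-servers \
    --query KeyMaterial --output text > ~/.ssh/media-servers.pem
% chmod 400 ~/.ssh/media-servers.pem    # ssh refuses keys with loose permissions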
In general, it is a good idea to use unique SSH keys for unrelated projects. This way you can control access between projects without sharing all the keys with everyone. Many people suggest that if you have multiple people sharing access to the same servers, each person should have their own unique SSH keys.
If you use a Windows PC, convert your keys to PuTTY's .ppk format with PuTTYgen and load them into the Pageant agent, and PuTTY can pick the keys up from there.
If you use a Mac, it comes with a built-in SSH client, and all you need to do is:
% ssh -i .ssh/my_servergroup-1_key.pem user@servername.aws.com
You can add your private keys to the keychain application by using the ssh-add command with the -K option and the .pem file for the key, as shown in the following example. The agent prompts you for your passphrase, if there is one, and stores the private key in memory and the passphrase in your keychain.
% ssh-add -K myPrivateKey.pem
Enter passphrase for myPrivateKey.pem:
Passphrase stored in keychain: myPrivateKey.pem
Identity added: myPrivateKey.pem (myPrivateKey.pem)
Adding the key to the agent lets you use SSH to connect to an instance without having to use the -i <keyfile> option when you connect.
DO NOT LOSE YOUR PRIVATE KEYS, AND KEEP THEM SAFE!
Amazon allows you to generate your server key pair when you launch an instance. This is the only time you get to download the private key for accessing the system. If you lose this key, you have lost access to the server.
There are some complicated ways to fix the matter, but the process is cumbersome and takes a lot of time. It is also not guaranteed to work in every case, so you are risking the loss of data and applications.
4. Understanding Amazon Virtual Private Clouds (VPCs) to Enhance Data and Application Security
Amazon Virtual Private Cloud (VPC) is a networking feature of AWS that lets you define a private network for a group of EC2 server instances and other resources. This is a very useful feature, and you should make use of it. It greatly simplifies fencing off components of your infrastructure and minimizing the number of externally facing servers, since you control all public access.
Amazon VPC lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways. It is a complicated process, but the effort is worth it.
The basic idea of a VPC is to separate your infrastructure into two halves, a public half and a private half. The external endpoints for your applications, such as web servers or load balancers, are placed in the public half. Your database servers, internal message queuing systems, and any other internal application components should be in the private half. This way, hackers cannot reach your internal components directly.
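As a rough sketch of the first steps with the AWS CLI (the CIDR ranges and VPC ID are illustrative, and the returned IDs would feed into later commands for route tables and an Internet gateway):

% aws ec2 create-vpc --cidr-block 10.0.0.0/16
% aws ec2 create-subnet --vpc-id vpc-0abc123 --cidr-block 10.0.0.0/24    # public half
% aws ec2 create-subnet --vpc-id vpc-0abc123 --cidr-block 10.0.1.0/24    # private half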
Setting Up a Bastion Host for VPC Access
As the number of EC2 instances in your AWS environment grows, so too does the number of administrative access points to those instances.
Depending on where your administrators connect to your instances from, you may consider enforcing stronger network-based access controls. A best practice in this area is to use a bastion host or server. A bastion is a special-purpose server instance that is designed to be the primary access point from the Internet and acts as a proxy to your other EC2 instances.
If you create a VPC, you'll need a bastion host to access the internal components in the private half. This is a dedicated server that acts as an SSH proxy for connecting to your other internal components. It sits in the public half of your VPC.
You can use the cheapest t1.micro instance, since all this server does is create SSH sessions to other servers. I have seen some people even install VPN software such as OpenVPN so that they have quick access to the private part of the network.
As mentioned earlier, Linux instances in EC2 use SSH key files for authentication instead of SSH usernames and passwords. Using key files reduces the chance of somebody guessing a password to gain access to the instance. But using key pairs with a bastion host presents a challenge: connecting to instances in the private subnets requires a private key, yet you should never store private keys on the bastion.
Always remember the following when configuring your bastion:
- Never place your SSH private keys on the bastion instance. Instead, use SSH agent forwarding to connect first to the bastion and from there to other instances in private subnets. This lets you keep your SSH private key just on your computer.
- Configure the security group on the bastion to allow SSH connections (TCP/22) only from known and trusted IP addresses.
- Always have more than one bastion. You should have a bastion in each availability zone (AZ) where your instances are. If your deployment takes advantage of a VPC VPN, also have a bastion on premises.
- Configure Linux instances in your VPC to accept SSH connections only from bastion instances.
One well-known solution is to use SSH agent forwarding (ssh-agent) on the client. This allows an administrator to connect from the bastion to another instance without storing the private key on the bastion. It is easy to set up, and instructions are available on the Internet; a minimal sketch follows.
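Assuming an illustrative bastion hostname and private-subnet IP, agent forwarding looks like this (the -A flag forwards your local agent to the bastion, so the key never leaves your machine):

% ssh-add ~/.ssh/my_servergroup-1_key.pem     # load the key into your local agent
% ssh -A ec2-user@bastion.example.com         # connect to the bastion with agent forwarding
[bastion]$ ssh ec2-user@10.0.1.15             # hop to a private instance; no key file on the bastion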
5. Securely Accessing Your Server Instances
Any server with port 22 (SSH) open to the public Internet will attract a lot of hacking attempts. Within 24 hours of turning on such a server, you will see plenty of entries in your SSH logs from bots trying to brute-force a login. If you only allow SSH key-based authentication, this is a pointless exercise for them, but it is still annoying to see all the entries in the log (i.e., it is extra noise).
One way to avoid this is to whitelist the IP addresses that can connect to your server. If you have a static IP address at your office, or one that is "mostly static" (e.g., most dynamic IPs for cable modems and DSL don't change very often), then you can set up the firewall rules for your servers to allow inbound SSH access only from those IPs. Other IP addresses will not even be able to tell there is an SSH server running; port scanning for an SSH server will fail because the initial TCP socket is never established.
Normally this would be a pain to manage on multiple servers, but with a bastion host it only needs to be done in one place. Later on, if your IP address changes or you need to connect to your server from a new location (e.g., on the road at a hotel), just add your current IP address to the whitelist. When you're done, simply remove it from the list.
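With EC2 security groups, maintaining that whitelist is a pair of AWS CLI calls (the group ID and IP addresses below are placeholders):

% aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 22 --cidr 203.0.113.10/32    # allow SSH from the office IP
% aws ec2 revoke-security-group-ingress --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 22 --cidr 198.51.100.25/32   # later, remove a temporary hotel IP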
6. Best Practices for AWS Security
Since AWS eliminates any on-site hardware, security becomes of paramount importance. Fortunately, the AWS team is aware of this and provides a host of services to tackle major known security issues and to help add levels of security to your data. Ultimately, though, it is the user's responsibility to ensure the confidentiality, integrity, and availability of their data according to their business requirements. Some best practices for data security are highlighted below.
Develop and Set Up Proper Resource and User Policies
Once a user has been authenticated, you can control which resources they are authorized to use through resource policies or capability policies. Resource policies are attached to the resource and contain the rules of what can be done with it. Capability policies are user specific: they control what the user has permission to do, either directly or indirectly through an IAM (Identity and Access Management) group.
Use capability policies for company-wide access rules, as they can override resource policies. IAM policies are flexible: you can restrict access to a specific source IP address range, and even vary access by day of the week or time of day, to maintain a suitable level of security.
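As a sketch of the source-IP restriction (the user name, policy name, and CIDR range are illustrative), a policy that denies all actions from outside an office range could look like this, saved as office-ip-only.json:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Deny",
    "Action": "*",
    "Resource": "*",
    "Condition": { "NotIpAddress": { "aws:SourceIp": "203.0.113.0/24" } }
  }]
}

% aws iam put-user-policy --user-name alice --policy-name office-ip-only \
    --policy-document file://office-ip-only.json

Condition keys such as aws:CurrentTime can similarly restrict access by time of day.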
Managing Encryption Keys
Any security measure that involves encryption requires a key, and AWS provides a number of options to keep that key secure. It is essential that keys be stored in tamper-proof cryptographic storage, and AWS provides such an HSM (Hardware Security Module) service in the cloud itself, known as AWS CloudHSM.
If you would prefer to store the keys on premises, make sure you access them over secure links, such as AWS Direct Connect with IPsec. It is advisable to replicate CloudHSMs across Availability Zones for higher resilience and ready availability.
Protecting Data at Rest and in Transit in AWS
As in all areas of security, your best friends are permissions. Restrict access on a need-to-know basis. Encrypt your data and perform integrity checks such as MACs (Message Authentication Codes) and HMACs (hashed MACs) to ensure that data integrity is not compromised, be it by malicious intent or a harmless mistake.
Use versioning in S3, and back up your data so it can be restored if a fault is detected. Amazon DynamoDB provides automatic data replication between geographically separate Availability Zones to ensure your data survives a compromise or natural disaster.
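Turning on S3 versioning is a one-liner with the AWS CLI (the bucket name is a placeholder):

% aws s3api put-bucket-versioning --bucket my-backup-bucket \
    --versioning-configuration Status=Enabled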
The same applies to data in transit, but since the cloud communicates over the Internet, add security measures to protect the communication channels. Use SSL/TLS with server certificate authentication to ensure that the remote end is not an impostor or attacker.
Managing Old Data
In AWS, the physical media storing the data are not decommissioned when you delete it; instead, the storage units are marked as unallocated, and once the data has reached the end of its usefulness, AWS wipes it out. If you require further control over the decommissioning process and want to be certain that your data is irrecoverable, you can encrypt the data using customer-managed keys that are not stored in the cloud. Once the data is decommissioned in AWS, you delete your key, thus wiping out the data in its entirety.
7. Final Words on AWS Migration
The AWS Cloud, or for that matter any other cloud such as OpenStack, is not for everyone, and not every application or organization needs a cloud-based infrastructure. Many applications will run better on dedicated servers, and a lot more cheaply, safely, and securely. After all, a cloud server is just a single virtual instance created on a large dedicated server.
A good engineering team can set up dedicated servers that will easily outperform any cloud configuration. However, one benefit of the cloud is simply unmatched: the ability to auto-scale very quickly. You can spin up new instances in a matter of minutes.
If you need guidance or consulting on how to migrate your current applications to Amazon Cloud, feel free to contact me.