Amazon Web Services

At 18F, we use Amazon Web Services (AWS) as our infrastructure as a service (IaaS). We have separate AWS accounts for our production systems and sandboxes for development and testing. If you’re used to developing locally, you should feel empowered to do everything you’d like in an AWS sandbox account. Note that AWS is currently the only IaaS provider we are able to use in TTS. You’re free to develop purely locally as long as you’d like, but if you want to get a system online, cloud.gov and AWS are your only options, of which cloud.gov is preferred.

In particular, you cannot send traffic from the internet to your local machine; you must use a sandbox account for this purpose.


If you are familiar with running virtual machines on your own computer through Parallels, VirtualBox, or VMware, AWS operates on the same principles but at a truly massive scale. Pretty much everything in AWS can be orchestrated via the AWS API and command-line interface (CLI).

The core service of AWS is the Elastic Compute Cloud (EC2). These are virtual machines just like on your computer, but hosted in the AWS environment.
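As a sketch of what that orchestration looks like in practice, launching a single instance from the CLI might look like the following. All of the IDs and names here are placeholders, not real values:

```shell
# Launch one small EC2 instance from an AMI (all IDs/names are placeholders).
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t3.micro \
  --key-name my-sandbox-key \
  --security-group-ids sg-0123456789abcdef0
```

Running this requires AWS credentials for a sandbox account, which is exactly where this kind of experimentation belongs.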

If you want very basic and cheap object storage, AWS provides the Simple Storage Service (S3).
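For example, creating a bucket and copying a file into it from the CLI might look like this (the bucket name and file are hypothetical, and bucket names must be globally unique):

```shell
# Create a bucket, then copy a local file into it (placeholder names).
aws s3 mb s3://my-sandbox-bucket
aws s3 cp ./report.csv s3://my-sandbox-bucket/report.csv
```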

These are just the concepts necessary for initial on-boarding. AWS has an extensive list of other services.

Building systems that will be deployed directly to AWS

Although cloud.gov is strongly preferred as the production environment for the systems we build, some systems will need to run directly on AWS. You can see the GSA approval status and caveats for using different AWS services in this spreadsheet.

In order to ensure systems deployed to AWS are robust and reliable, and to ensure the integrity of information stored in AWS, we impose some additional restrictions on systems deployed to the 18F production AWS environment.


Anyone in 18F can get access to the AWS sandbox account. However, only the 18F infrastructure team has login credentials to our production 18F account, and those credentials are used only for debugging and incident management. All systems are deployed using a continuous delivery service, from scripts stored in version control, and are registered with #infrastructure.

This means:

  • All configuration of your production environment must be performed using Terraform scripts checked into version control.
  • There will be no “back channel” access to AWS resources for systems deployed into production. Any routine activities, such as data management and import/export/archiving, must be performed through your system.
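A minimal workflow consistent with these rules might look like the following, where every change flows through version control and Terraform rather than the AWS console (the repository name is hypothetical):

```shell
# Clone the system's infrastructure scripts from version control
# (hypothetical repository name).
git clone https://github.com/18F/example-system-terraform.git
cd example-system-terraform

terraform init    # download providers and initialize state
terraform plan    # preview the changes before applying them
terraform apply   # apply the reviewed changes
```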

Auto-scaling groups

To ensure that systems remain available even when hardware failures within AWS cause VMs to be terminated, all EC2 instances must be launched within an auto-scaling group from an AMI.
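A sketch of creating such a group from the CLI, assuming a launch template has already been built from a baseline AMI (the template ID, group name, and subnet IDs are placeholders):

```shell
# Create an auto-scaling group from an existing launch template
# (all IDs and names are placeholders).
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name example-system-asg \
  --launch-template LaunchTemplateId=lt-0123456789abcdef0,Version='$Latest' \
  --min-size 2 \
  --max-size 4 \
  --vpc-zone-identifier "subnet-0123456789abcdef0,subnet-0fedcba9876543210"
```

If an instance in the group is terminated, AWS replaces it automatically from the same template, which is the availability property this requirement exists to guarantee.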


Virtual private clouds (VPCs)

To ensure logical partitioning of systems running within the 18F production environment, every system must be hosted within its own virtual private cloud (VPC). Network security settings are set at the VPC level, including which ports and IP addresses EC2 instances can use to communicate with each other and back out to the internet.

Occasionally, out-of-date documentation from third parties and Amazon itself may reference EC2 Classic. We at 18F do not support this environment.

HTTPS Everywhere

Regardless of what your system does, we enforce HTTPS Everywhere.

Approved services for production use

Not all AWS services are approved by GSA IT for production use. GSA IT maintains a current list of approved services (note: only visible to GSA employees and contractors).

Operating system (OS) baseline

We use a pre-hardened version of Ubuntu as our baseline OS for all EC2 instances in AWS. These images are created using the FISMA Ready project on GitHub. In each AWS Region, there are Amazon Machine Images (AMIs) with these controls already implemented. You should always launch new instances from this baseline. You can find them by searching for the most recent AMI named FISMA Ready Baseline Ubuntu (TIMESTAMP - Packer), where TIMESTAMP is the image’s build timestamp.
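One way to find the most recent baseline image in your current region is to filter on that name pattern and sort by creation date, for example:

```shell
# Find the most recently created FISMA Ready baseline AMI in this region.
aws ec2 describe-images \
  --filters "Name=name,Values=FISMA Ready Baseline Ubuntu*" \
  --query 'sort_by(Images, &CreationDate)[-1].{Id: ImageId, Name: Name}'
```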

Other people’s information

Any system in AWS might have the public’s information (as opposed to public data) at any time. Some systems use stronger measures to help protect the information if it is sensitive. For example, MyUSA uses row-level encryption. If you are unsure of the sensitivity of the data you’re going to be handling, consult with 18F Infrastructure first.

Use common sense when handling this information. Unless you have permission and need to in order to do your job:

  • Don’t release information
  • Don’t share information
  • Don’t view information

Regardless of your own norms around privacy, always assume the owner of that data has the most conservative requirements unless they have taken express action, either through a communication or the system itself, telling you otherwise. Take particular care in protecting sensitive personally identifiable information (PII).

Your information

In order to make sure we are protecting the integrity of the public systems, you have no expectation of privacy on any federal system. Everything you do on these systems is subject to monitoring and auditing.


Tagging

Tagging resources in AWS is essential for identifying and tracking deployed resources. Tags make it easier to reason about billing and help determine whether a system belongs to a particular environment (e.g. production). See the sandbox environment for an example of how tags enable lifecycle management of resources in AWS.

At a minimum, every AWS resource must have a Project tag with enough information to identify the project the resource is associated with.
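Tags can be set when a resource is created or added afterward from the CLI; a minimal example of adding the required Project tag to an existing instance (the instance ID and project name are placeholders):

```shell
# Add the required Project tag to an existing EC2 instance
# (instance ID and project name are placeholders).
aws ec2 create-tags \
  --resources i-0123456789abcdef0 \
  --tags Key=Project,Value=example-system
```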