AWS Certified SysOps Administrator Official Study Guide - Cole Stephen

Chapter 2
Working with AWS Cloud Services
Introduction to AWS Cloud Services


As a qualified candidate for the AWS Certified SysOps Administrator – Associate certification, it’s not enough to read the guide – you need to get your hands dirty by digging in. This chapter provides you with a starting point for using several AWS tools that will help you be successful as you learn how to use the cloud in a more effective manner.

Systems Operations Using the AWS Toolset

It’s likely that you are familiar with the AWS Management Console, the web-based interface to AWS Cloud services. In this study guide, we won’t spend much time instructing you on the basics of the AWS Management Console. You’ve probably been using it already, and we believe there is more value in instructing you, the systems operator, in the tools that will allow you to integrate AWS functionality into the scripting environments in which you are already an expert.

There are several AWS-provided tools available for customers to create, maintain, and delete AWS resources at the command line or in code: the AWS Command Line Interface (AWS CLI), AWS Tools for PowerShell, and AWS Software Development Kits (SDKs). Understanding these tools is an essential part of an effective cloud operations team’s automation and scripting toolkit.

Installing the AWS CLI

To find instructions on how to install the latest version of the AWS CLI, navigate to http://aws.amazon.com/cli in a web browser. For Windows, you’ll download and install the 32-bit or 64-bit installer that is appropriate for your computer. If you’re using Mac or Linux and have Python and pip installed, installing the latest version of the AWS CLI is as simple as running pip install awscli.

Upgrading the AWS CLI

Upgrading the AWS CLI on a Linux or Mac computer is as simple as running pip install --upgrade awscli. For Windows users, you’ll have to download and run the latest installer.

You should follow the AWS Security Bulletins page at https://aws.amazon.com/security/security-bulletins/ to stay aware of security notifications about the AWS CLI.

Configuration

After installing the AWS CLI, run aws configure to configure it with your credentials. Specifically, you will need an access key and secret key created for your AWS Identity and Access Management (IAM) user. Optionally, you can set a region (for example, us-east-1) and a default output format (for example, JSON) after entering your access key and secret key. The aws configure Command Options are shown in Table 2.1.
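A typical aws configure session looks something like this (the access key and secret key shown are the placeholder examples from the AWS documentation, not real credentials):

```
$ aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFiEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-east-1
Default output format [None]: json
```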


TABLE 2.1 The aws configure Command Options

Safeguard your access key and secret key credentials as you would a user name and password for the AWS Management Console. Safeguarding these credentials is crucial to help prevent unauthorized access to your AWS infrastructure.

If you ever believe that your credentials have been compromised, you should deactivate them immediately.

You can also create multiple profiles by appending --profile profile-name to the aws configure command. This can be handy in a number of different situations. You may want to have separate profiles with separate privileges for development, testing, and production environments. You could also create unique profiles for multiple accounts that you need to access. Creating different profiles will allow you to execute commands using different configurations for each.

After you’ve run aws configure, your credentials are stored in ~/.aws/credentials on Mac or Linux, or in %UserProfile%\.aws\credentials on Windows. Your other configuration parameters are stored in ~/.aws/config on Mac or Linux, or in %UserProfile%\.aws\config on Windows. The AWS CLI will look in these locations for the credentials and configuration information each time it is called to execute a command.
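Because both files use a simple INI layout, a script can inspect them with Python's standard configparser module. The file contents below are a made-up sample with two profiles; the access keys are AWS's documented placeholder values, not real credentials:

```python
import configparser

# A made-up sample of ~/.aws/credentials containing two profiles, the
# kind of file that "aws configure" and "aws configure --profile dev" create.
sample = """
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFiEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

[dev]
aws_access_key_id = AKIAI44QH8DHBEXAMPLE
aws_secret_access_key = je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY
"""

parser = configparser.ConfigParser()
parser.read_string(sample)

print(parser.sections())                   # profiles defined in the file
print(parser["dev"]["aws_access_key_id"])  # one credential value
```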

This chapter has only started covering the configuration options for the AWS CLI. AWS provides you with the ability to specify a Multi-Factor Authentication (MFA) device to use with your credentials, an Amazon Resource Name (ARN) corresponding to a role that you want to assume for cross-account access, and more. Find out more details on the configuration options available by running aws help config-vars.

Environment Variables

You can specify configuration parameters using environment variables as well, as listed in Table 2.2. This ability can come in handy for making swift changes in scripts or on a temporary basis from the command line.


TABLE 2.2 Environment Variables


How you change the variable depends on the shell you are using. In the bash shell, which is most commonly the default on Linux and Mac systems, you use the format export environment_variable=option to set the new variable.
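For example, in bash you might override the region and profile for the current shell session only (AWS_DEFAULT_REGION and AWS_PROFILE are variables the AWS CLI documents; the values here are illustrative):

```shell
# Override the AWS CLI's region and profile for this shell session only;
# these take precedence over the values stored in ~/.aws/config.
export AWS_DEFAULT_REGION=us-west-2
export AWS_PROFILE=dev

# Confirm what the CLI will now see.
echo "$AWS_DEFAULT_REGION"
echo "$AWS_PROFILE"
```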

Getting Help on the AWS CLI

You can add the option help to the end of nearly every AWS CLI command to determine a list of available options. For example, executing aws help will return a list of all of the services available to use as options. Running aws s3 help will return a list of the valid parameters to pass as part of a command-line call to Amazon Simple Storage Service (Amazon S3).

Autocompletion

Support for tab completion – the ability to start typing a command and have a list of valid options to complete your command appear when you press Tab – is a feature built into the AWS CLI but not enabled by default. You can enable autocompletion for the bash shell (Linux or Mac) by typing complete -C aws_completer aws.

Source Code

AWS makes the AWS CLI source code available within the terms of the Apache License, Version 2.0. If you remain within the license, you can review the code before using it or adapt it into a new tool for your own project. There is an active community involved with the source code in which you are encouraged to participate. Find the code and more information on the user community at https://github.com/aws/aws-cli.

AWS publishes code on GitHub, not only for the AWS CLI, but also for the AWS SDKs and many other tools. Doing so helps give customers access to the code behind these tools to help them incorporate the code into their projects and spend more time innovating rather than creating building blocks. Take a look at some of the tools available at https://github.com/aws/.

Working with Services

Executing an AWS CLI command is as simple as typing aws and then a command string followed by a list of options.

The format of your command will generally take the form of the following:

aws service parameter1 parameter2 … parameterN

For example, aws ec2 describe-instances will return a list of your Amazon Elastic Compute Cloud (Amazon EC2) instances, along with their properties, running in your configured region. aws s3 ls s3://mycertification/ will return an object listing of an Amazon S3 bucket you own named mycertification.

Output Types

In the Configuration section, we mentioned that you can represent the data retrieved using the AWS CLI in three output formats: “JSON,” “text,” or “table.” Each format can provide a number of benefits to the user depending on the use case in question.

JSON is the default format, and it provides data in a form that is easily parsed and ingested by applications. This format is commonly used in other AWS Cloud services (for example, AWS CloudFormation), and it is a standard in which operations personnel should become well versed if they want to excel. Text output allows the operator to output data in a tab-delimited format that can be parsed by tools like grep and other text parsers. (If you happen to be a Linux systems administrator, you’re likely very familiar with this tool.) Table format is often more easily human readable than JSON or text.
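To make the difference concrete, here is a short Python sketch (using made-up instance data, not real API output) that renders the same records first as JSON and then as tab-delimited text:

```python
import json

# A made-up fragment of what "aws ec2 describe-instances" might return,
# trimmed to two fields for illustration.
instances = [
    {"InstanceId": "i-0abc123", "InstanceType": "t2.micro"},
    {"InstanceId": "i-0def456", "InstanceType": "m4.large"},
]

# JSON output: easy for applications to parse and ingest.
print(json.dumps(instances, indent=2))

# Text output: tab-delimited lines that tools like grep and awk handle well.
text_lines = [f"{i['InstanceId']}\t{i['InstanceType']}" for i in instances]
for line in text_lines:
    print(line)
```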

Avoiding Unwieldy Lines

As you gain more experience using the AWS CLI, you will find that your command lines can become increasingly difficult to manage effectively as your parameters become more complex. There are several strategies to deal with this problem.

First, in Linux or Mac, you can use the backslash character to separate a command into several lines. For example, this command:

aws rds download-db-log-file-portion --db-instance-identifier awstest1 --log-file-name "error/postgres.log"

is equivalent to the following command, split across lines with backslashes:

aws rds download-db-log-file-portion \
    --db-instance-identifier awstest1 \
    --log-file-name "error/postgres.log"

Using backslashes makes the command more easily comprehensible to a human reader, thus assisting with troubleshooting when errors occur.

Next, some AWS CLI commands take a JSON-formatted string as part of the input. For example, the aws ec2 create-security-group command has a parameter --cli-input-json that takes a JSON-formatted string as an input. As an alternative to entering the string via the command line, you can refer to a local file as follows:

aws ec2 create-security-group --cli-input-json file://filename.json

where filename.json is the file containing the JSON string.

Additionally, you can store the JSON string as an object in Amazon S3 or another web-hosted location and access the file as a URL:

aws ec2 create-security-group \
    --cli-input-json \
    https://s3.amazonaws.com/cheeeeessseeee/filename.json

This makes it easier to reuse the JSON string that you’ve created for one environment in another.
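One way to produce filename.json is to generate it from a script. The sketch below writes a minimal parameter file; the key names follow the skeleton that aws ec2 create-security-group --generate-cli-skeleton produces, but verify them against your own CLI version, and the values are invented:

```python
import json

# Invented parameters for "aws ec2 create-security-group"; check the key
# names against --generate-cli-skeleton output for your CLI version.
params = {
    "GroupName": "web-tier-sg",
    "Description": "Security group for the web tier",
    "VpcId": "vpc-0123456789abcdef0",
}

# Write the parameter file that --cli-input-json file://filename.json reads.
with open("filename.json", "w") as f:
    json.dump(params, f, indent=2)
```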

Using query to Filter Results

As you explore using the AWS CLI, you will find that there is a wealth of information about your AWS environment that can be retrieved using the tool. Command-line output is comprehensive. Running the command aws ec2 describe-instances returns dozens of values describing each instance running: InstanceId, PublicDnsName, PrivateDnsName, InstanceType, and much more. There are times when you don’t want to return all of those values, though. What do you do if you want to retrieve only a list of the Amazon Machine Image (AMI) IDs that your instances are running so that you can make sure that your fleet is running your preferred image?

That’s where the --query option comes in. This option allows you to filter results so that only the output matching the parameters you specify is returned. --query uses the JMESPath query language as its input for filtering to the results you specify.

You can find a tutorial for the JMESPath query language at http://jmespath.org/tutorial.html.

Here are some examples of query in practical use cases. Perhaps you want to obtain the metadata for your Amazon Relational Database Service (Amazon RDS) instances, but only those that are running in the us-east-1e Availability Zone:

aws rds describe-db-instances \
    --query 'DBInstances[?AvailabilityZone==`us-east-1e`]' \
    --output text

Maybe you want a list of your AWS IoT things that are Intel Edison devices:

aws iot list-things --query 'things[?thingTypeName==`IntelEdison`]' --output text

Or maybe you’ve been tasked with identifying a list of the instances with their associated instance type that are running in your environment so that they can be targeted as candidates for upgrades to newer generation types:

aws ec2 describe-instances \
    --query 'Reservations[*].Instances[*].[InstanceId, LaunchTime, InstanceType]' \
    --output text

That last one is a bit different from what we executed in the previous examples. Note that we are working our way down the JSON hierarchy: first we specify that everything under Reservations, and then everything under Instances, is in scope for our query (the * character works as our wildcard here). In the final set of brackets, we specify the specific fields at that level that we want returned (InstanceId, LaunchTime, and InstanceType in this example), allowing us to see only the fields that are useful for our task.

Query can be a powerful tool. However, output can vary among the resources you list using the AWS CLI (differing fields may be present in your output based on a number of variables). Accordingly, it’s recommended that you rely on text format for any outputs that you run through query; you can see that we’ve added that output parameter to the queries here. Additionally, using text format makes it easier to use tools like grep on the output.
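If it helps to see what the first filter above is doing, here is the same selection expressed as plain Python over a small, made-up describe-db-instances response:

```python
# A made-up response document shaped like "aws rds describe-db-instances"
# output, trimmed to the fields the filter touches.
response = {
    "DBInstances": [
        {"DBInstanceIdentifier": "awstest1", "AvailabilityZone": "us-east-1e"},
        {"DBInstanceIdentifier": "awstest2", "AvailabilityZone": "us-east-1a"},
    ]
}

# Equivalent of the JMESPath expression
# DBInstances[?AvailabilityZone==`us-east-1e`]
matches = [db for db in response["DBInstances"]
           if db["AvailabilityZone"] == "us-east-1e"]

print([db["DBInstanceIdentifier"] for db in matches])
```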

AWS Tools for Windows PowerShell

To this point, we’ve been focusing on the AWS CLI tool in our discussion of how a systems operator can effectively administer a customer’s cloud resources from the command line. Because this tool works across operating systems, the AWS CLI provides an effective way to administer across various shells.

There is, however, a notable contingent of IT professionals whose favorite command-line shell is Windows PowerShell. To serve those customers who prefer PowerShell, we have provided a full-featured tool for that environment called AWS Tools for Windows PowerShell. Although we will not dive into this tool in this book, if you love PowerShell, you can find more information at https://aws.amazon.com/powershell/.

AWS Software Development Kits (SDKs)

AWS provides a number of SDKs for use by programmers. Although we don’t expect that a systems operator would use an SDK directly on a regular basis, as a knowledgeable AWS resource, it’s important that you understand that the SDKs and the underlying APIs they use exist, and that you have some general knowledge about how they are used.

There are a few reasons for this. For one thing, some of these languages – Python, for example – straddle the lines between programming languages that developers use to compile executable code and scripting languages that administrators use to perform infrastructure tasks. That leads into the next reason why we’re talking about SDKs: The line between development and operations is increasingly blurry. As operations and development responsibilities merge into the new world of DevOps, it’s important for those in charge of operations to understand the basics of how applications integrate with infrastructure.

AWS Certification Paths

There are three paths that an AWS Certification candidate can take toward Professional status: Architecting, Developing, and the one you’re focusing on by reading this book, Operations. It’s worth noting that while the Architecting path has its own professional certification (the AWS Certified Solutions Architect – Professional), the Developing and Operations paths share the same professional credential: the AWS Certified DevOps Engineer certification.

As the differentiation between Development and Operations becomes increasingly blurry, it’s important for both groups to understand what the other does on a daily basis. Hence, the SysOps and Developer paths merge at the Professional level.

It’s through the AWS SDKs and the APIs that underlie them that applications built on AWS can manage infrastructure as code. The concept of infrastructure as code is powerful, disruptive, and sets the cloud apart from the old IT world.

At the time this book was written, AWS SDKs were available for the following programming languages:

■ Android

■ Browser (JavaScript)

■ iOS

■ Java

■ .NET

■ Node.js

■ PHP

■ Python

■ Ruby

■ Go

■ C++

There are also two purpose-specific SDKs:

■ AWS Mobile SDK

■ AWS IoT Device SDK

The language-specific SDKs contain APIs that allow you easily to incorporate the connectivity and functionality of the wider range of AWS Cloud services into your code without the difficulty of writing those functions yourself. Extensive documentation accompanies each SDK, giving you guidance as to how to integrate the functions into your code.

We focus on the AWS SDK for Python as our reference SDK for this chapter.

Boto

The AWS SDK for Python is also known as Boto. Like the other AWS SDKs and many of our tools, it is available as an open source project in GitHub for the community to view freely, download, and branch under the terms of its license. There is an active Boto community, including a chat group, which can help answer questions. Let’s get started by installing Boto and jump right into using it.

AWS and Open Source

AWS has been committed to the idea of open source software since day one. Open source code allows customers to review code freely and contribute new code that is optimized or corrected. AWS not only uses open source software, such as Xen, SQL, and the Linux operating system, but often contributes improvements to various open source communities.

Installing Boto

Given that Boto is an SDK for Python, it requires Python to be installed prior to its own installation. The method of doing so depends on the operating system involved. You can find more information about installing Python at http://www.python.org/. Another prerequisite is pip, a Python tool for installing packages, which can be found at https://pip.pypa.io/.

After installing Python and pip, you install Boto using the following command:

pip install boto3

It’s worth noting the boto3 at the end of the install command. The current version of the Boto SDK is 3. Although Boto 2 is still in use, we highly encourage customers to use Boto 3. Throughout the rest of this chapter, when we refer to “Boto,” we are referring to Boto 3.

By default, Boto uses the credential files that you established in setting up the AWS CLI as its own credentials for authenticating to the AWS API endpoints.

Features of Boto

Boto contains a variety of APIs that operate at either a high level or a low level. The low-level APIs (Client APIs) are mapped to AWS Cloud service-specific APIs. The details of how to use the low-level APIs are found in the Boto 3 documentation at https://boto3.readthedocs.io/en/latest/guide/clients.html. Although the low-level APIs can be useful, we suspect that those involved in systems operations will not often need to dig into the specifics of their use.

The higher-level option, Resource APIs, allows you to avoid calling the network at the low level and instead provides an object-oriented way to interact with AWS Cloud services. We’ll cover the use of Resource APIs in more detail next.

Boto also has a helpful feature called the waiter. Waiters provide a structure that allows for code to wait for changes to occur in the cloud. For example, when you create a new Amazon EC2 instance, there is a nominal amount of time until that instance is ready to use. Having your code rely on a waiter to proceed only when the resource is ready can save you time and effort.
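Under the hood, a waiter is a polling loop around a describe call. The stdlib-only sketch below (our own names, no AWS calls) shows the shape of that loop; in Boto itself you would use something like ec2_client.get_waiter('instance_running').wait(InstanceIds=[...]) rather than writing it yourself:

```python
import time

def wait_until(check, interval=0.01, max_attempts=40):
    """Poll check() until it returns True, the way a waiter polls a
    describe call until the resource reaches the desired state."""
    for _ in range(max_attempts):
        if check():
            return True
        time.sleep(interval)
    raise TimeoutError("resource never reached the desired state")

# Simulate a resource that becomes ready on the third poll.
state = {"polls": 0}

def instance_running():
    state["polls"] += 1
    return state["polls"] >= 3

ready = wait_until(instance_running)
print(ready, state["polls"])
```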

There is also support for multithreading in Boto. By importing the threading module, you can establish multiple Boto sessions. Those multiple Boto sessions operate independently from one another, allowing you to maintain a level of isolation between the transactions that you’re running.
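The recommended shape for that pattern is one session object per thread rather than a single shared one. The stdlib-only sketch below shows that shape; the Session class here is a stand-in we invented, not boto3's:

```python
import threading

class Session:
    """Stand-in for a per-thread Boto session; holds per-thread config."""
    def __init__(self, profile_name):
        self.profile_name = profile_name

results = {}

def worker(profile):
    # Each thread constructs its own session instead of sharing one,
    # keeping its transactions isolated from the other threads.
    session = Session(profile_name=profile)
    results[profile] = session.profile_name

threads = [threading.Thread(target=worker, args=(p,))
           for p in ("dev", "test", "prod")]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))
```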

These are just a few of the features Boto offers. For an in-depth look at these features, or to learn more about other features available in this SDK, refer to the Boto General Feature Guides at https://boto3.readthedocs.io/en/latest/guide/index.html#general-feature-guides.

Fire It Up

If you want to use Boto in your Python code, you start by importing the Boto SDK:

import boto3

And, if you’re using Interactive mode, you then press Enter.

Using Resource Application Programming Interfaces (APIs)

To start using the Boto class in Python, invoke it by calling boto3.resource and passing in a service name in single quotes. For example, if you wanted to perform an action using Amazon EC2, you would execute something similar to the following:

ec2 = boto3.resource('ec2')

You now have an object called ec2, which you can use to act on Amazon EC2 instances in your AWS account. You can then instantiate a method object pointing to a specific instance in your Amazon Virtual Private Cloud (Amazon VPC):

myinstance = ec2.Instance('i-0bxxxxxxxxxxxxxxx')

Perhaps you want to stop an Amazon EC2 instance programmatically, possibly at the end of a shift to save money when it’s not being used. You can then issue a command to stop that instance:

myinstance.stop()

You can start the instance back up again automatically prior to the beginning of the next shift:

myinstance.start()

Acting on infrastructure resources isn’t your only option with Boto. You can also retrieve information from the APIs about your resources. For example, perhaps you want to find out what AMI is being used for the instance in question. You can do that by reading the following attribute:

myinstance.image_id

A string is returned for this command containing the AMI ID of the named instance.

AWS Internet of Things (IoT) and AWS Mobile Software Development Kits (SDKs)

We’ve been covering the language-specific AWS SDKs, which focus on the management of many AWS Cloud services. There are two purpose-specific SDKs as well: The AWS IoT Device SDK and the AWS Mobile SDK. Like the general-usage SDKs that we covered previously, these purpose-specific SDKs also provide developers with the ability to use prebuilt libraries that make it easier for them to focus on innovation in their code, not in how they connect to the infrastructure that runs it.

The IoT and Mobile SDKs are different because they are purpose-built to streamline connecting physical devices, like phones and tablets or sensors and hubs, to the cloud. The SDKs are provided for a variety of languages commonly used on their respective platforms. At the time this book was written, these SDKs were available for the following languages/platforms:

AWS Mobile SDK

■ Android

■ iOS

■ Unity

■ .NET

■ Xamarin

■ React Native

AWS IoT Device SDK

■ Embedded C

■ JavaScript

■ Arduino Yún

■ Java

■ Python

■ iOS

■ Android

■ C++

Like the SDKs previously discussed, many of these SDKs provide their source code in GitHub. Each contains extensive documentation and helpful sample code to allow developers to get up and running quickly.
