CompTIA Cloud+ Study Guide - Ben Piper, David Higby Clinton

Answers to Assessment Test

1 D. On-demand cloud computing allows the consumer to add and change resources dynamically with the use of an online portal.

2 B. The interconnection of multiple cloud models is referred to as a hybrid cloud.

3 C. Resource pooling is the allocation of compute resources into a group, or pool, and then these pools are made available to a multitenant cloud environment.

4 A. Infrastructure as a service offers computing hardware, storage, and networking but not applications.

5 B. Platform as a service offers computing hardware, storage, networking, and the operating systems but not the applications.

6 A, B, E. Elasticity, on-demand computing, and pay-as-you-grow are all examples of being able to expand cloud compute resources as your needs require.

7 B, D. One of the prime advantages of cloud-based computing and the automation and virtualization it offers in the background is the ability to leverage the rapid provisioning of virtual resources to allow for on-demand computing.

8 C. Software as a service offers cloud-managed applications as well as the underlying platform and infrastructure support.

9 C. The shared responsibility model outlines what services and portions of the cloud operations the cloud consumer and the provider are responsible for.

10 A. Cloud operators segment their operations into regions for customer proximity, regulatory compliance, resiliency, and survivability.

11 D. A storage area network (SAN) is a dedicated high-speed network that provides block-level access to shared storage. Block access is not a networking technology. Zoning restricts access to LUNs in a SAN, and VMFS is a VMware filesystem.

12 B, D, F. A hypervisor will virtualize RAM, compute, and storage; the VMs operating on the hypervisor will access these pools.

13 C. A private cloud is used exclusively by a single organization.

14 C. Authentication is the term used to describe the process of determining the identity of a user or device.

15 C. Storage area networks support block-based storage.

16 A, C, E. Application programming interfaces, command-line interfaces, and GUI-based interfaces are all commonly used tools to migrate, monitor, manage, and troubleshoot cloud-based resources.

17 D. A community cloud is used by companies with similar needs such as railroad companies.

18 D. RAID 5 uses parity information that is striped across multiple drives, which allows the drive array to be rebuilt if a single drive in the array fails. The other options do not have parity data.
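The rebuild mechanism in question 18 can be sketched in a few lines of Python: RAID 5 parity is the XOR of the data blocks, so XORing the surviving blocks with the parity block recreates the lost one. The block contents here are arbitrary illustrative bytes, not real disk data.

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR several equal-length byte strings together, column by column."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Three data blocks on three drives; the parity block lives on a fourth drive.
data = [b"\x01\x02", b"\x0f\x00", b"\xff\xaa"]
parity = xor_blocks(data)

# Simulate losing drive 1: rebuild its block from the survivors plus parity.
rebuilt = xor_blocks([data[0], data[2], parity])
```

Because XOR is its own inverse, any single missing block (data or parity) can be reconstructed this way, which is why RAID 5 tolerates exactly one drive failure.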

19 B. When migrating a server that is running on bare metal to a hypervisor-based system, you would be performing a physical-to-virtual migration.

20 D. Multifactor authentication systems use a token generator as something you have and a PIN/password as something you know.

21 B. Two-factor authentication includes something you have and something you know.

22 A. The mandatory access control approach is implemented in high-security environments where access to sensitive data needs to be highly controlled. Using the mandatory access control approach, a user will authenticate, or log into, a system. Based on the user's identity and security levels of the individual, access rights will be determined by comparing that data against the security properties of the system being accessed.

23 C. The question outlines the function of a role-based access control approach.

24 B. The Department of Defense Information Assurance Certification and Accreditation Process (DIACAP) is the U.S. Department of Defense process for managing IT security on its computer systems. Contractors must be DIACAP compliant to be certified as meeting DoD security requirements.

25 B. The platform-as-a-service model offers operating system maintenance to be provided by the service provider.

26 B. Single sign-on allows a user to log in one time and be granted access to multiple systems without having to authenticate to each one individually.

27 B. The security policy outlines all aspects of your cloud security posture.

28 C. IPsec implementations are found in routers and firewalls with VPN services to provide a secure connection over an insecure network such as the Internet.

29 B. The Health Insurance Portability and Accountability Act defines the standards for protecting medical data.

30 C. Advanced Encryption Standard is a symmetrical block cipher that has options to use three lengths, including 128, 192, and 256 bits. AES 256 is a very secure standard, and it would take an extremely long time and a lot of processing power to come even close to breaking the code.
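A quick back-of-the-envelope calculation illustrates why the "extremely long time" in question 30 is an understatement. The attacker speed of one trillion keys per second is an assumed figure chosen purely for illustration.

```python
# Keyspace sizes for the three AES key lengths.
keyspaces = {bits: 2**bits for bits in (128, 192, 256)}

# Hypothetical attacker testing 10^12 (one trillion) keys per second:
keys_per_second = 10**12
seconds_per_year = 60 * 60 * 24 * 365

# Even exhausting the *smallest* AES keyspace at this rate takes
# on the order of 10^19 years, far longer than the age of the universe.
years_to_exhaust_128 = keyspaces[128] / (keys_per_second * seconds_per_year)
```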

31 C, D. Temporary storage volumes that are destroyed when the VM is stopped are referred to as ephemeral or nondurable storage.

32 C. Applying security applications on a virtual server will cause an increase in CPU usage.

33 C. A dashboard is a graphical portal that provides updates and an overview of operations.

34 C. Ultimately the responsibility for data in the cloud belongs to the organization that owns the data.

35 C. An application programming interface (API) offers programmatic access, control, and configuration of a device between different and discrete software components.

36 C. Automation of cloud deployments was instrumental in the growth of cloud-based services.

37 C. Intrusion prevention systems monitor for malicious activity and actively take countermeasures to eliminate or reduce the effects of the intrusion.

38 B, D. One-time numerical tokens are generated on key fob hardware devices or smartphone soft-token applications.

39 B. SSL/TLS is most commonly used with web and smartphone applications. MD5 is a hash algorithm. IPsec is used to create VPNs over a public network, but VPNs are not as common as SSL/TLS for the scenario given.

40 C. Based on the information given, the description is for a vendor-based management application.

41 B. A patch is a piece of software that updates an application or operating system, to add a feature, fix a bug, or improve performance.

42 C. Blue-green is a software deployment model that uses two configurations for production that are identical to each other. These deployments can alternate between each other, with one active and the other inactive.
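The alternating arrangement described in question 42 can be sketched as a small state machine; the class, environment names, and version strings below are hypothetical, kept in memory only to show the traffic switch.

```python
class BlueGreenDeployment:
    """Two identical production environments; traffic points at only one."""
    def __init__(self):
        self.environments = {"blue": "v1.0", "green": "v1.0"}
        self.active = "blue"

    @property
    def idle(self):
        return "green" if self.active == "blue" else "blue"

    def release(self, version):
        # Deploy to the idle environment, then switch traffic over to it.
        target = self.idle
        self.environments[target] = version
        self.active = target

d = BlueGreenDeployment()
d.release("v1.1")   # green now runs v1.1 and receives all traffic
```

If the new release misbehaves, rolling back is just switching traffic back to the previously active environment, which is the main operational appeal of the pattern.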

43 C. Incremental backups are operations based on changes of the source data since the last incremental backup was performed.
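An incremental pass like the one in question 43 can be sketched as a scan for files whose modification time is newer than the previous backup; the function name and time-based change detection are illustrative simplifications.

```python
import os

def incremental_backup_candidates(root, last_backup_time):
    """Return the files under root modified since the previous backup ran."""
    changed = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            # Only files touched after the last backup are copied this pass.
            if os.path.getmtime(path) > last_backup_time:
                changed.append(path)
    return changed
```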

44 B. A snapshot is a file-based image of the current state of a VM, including the complete operating system and all applications stored on it. The snapshot will record the data on the disk and optionally its memory contents at that instant in time.

45 C. Orchestration systems enable large-scale cloud deployments by automating operations.

46 A, C, E. Common automation offerings are Chef, Puppet, and Ansible.

47 B. A rolling configuration will sequentially upgrade the web servers without causing a complete outage and would meet the requirements outlined in the question.
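The sequential upgrade in question 47 can be sketched as a loop that drains, upgrades, and verifies one server at a time; the dictionary fields and health check below are hypothetical stand-ins for load balancer and deployment tooling.

```python
def rolling_upgrade(servers, new_version, health_check):
    """Upgrade one server at a time so the pool never fully goes down."""
    for server in servers:
        server["in_service"] = False      # drain this node from the load balancer
        server["version"] = new_version   # apply the upgrade
        if health_check(server):          # verify before restoring traffic
            server["in_service"] = True
        else:
            raise RuntimeError(f"{server['name']} failed post-upgrade check")

pool = [{"name": f"web{i}", "version": "1.0", "in_service": True} for i in range(4)]
rolling_upgrade(pool, "2.0", health_check=lambda s: True)
```

At any point during the loop, at most one server is out of service, so the pool keeps serving traffic throughout the upgrade.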

48 C. Cloning takes the master image and clones it to be used as another separate and independent VM. Important components of a server are changed to prevent address conflicts; these include the UUID and MAC addresses of the cloned server.
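The identifier changes in question 48 can be sketched with Python's standard library; the function is a hypothetical helper showing one way to generate a fresh UUID and a locally administered MAC address for a clone.

```python
import random
import uuid

def clone_identifiers():
    """Generate a new UUID and a locally administered MAC for a cloned VM."""
    new_uuid = str(uuid.uuid4())
    # In the first octet, set the locally-administered bit (0b10) and
    # clear the multicast bit (0b01) so the address is a valid unicast MAC.
    first_octet = (random.randint(0, 255) & 0b11111100) | 0b00000010
    octets = [first_octet] + [random.randint(0, 255) for _ in range(5)]
    new_mac = ":".join(f"{o:02x}" for o in octets)
    return new_uuid, new_mac
```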

49 D. The manager is requesting data on the results of the quality assurance testing on the release.

50 A. A hotfix is a software update type that is intended to fix an immediate and specific problem.

51 B. Moving inactive data to a separate storage facility for long-term storage is called archiving.

52 A. The hot site model is the most viable option given the requirements. A hot site is a fully functional backup site that can assume operations immediately should the primary location fail or go offline.

53 B. Asynchronous replication is when data is written to the primary first, and then later a copy is written to the remote site on a scheduled arrangement or in near real time.

54 B. Edge locations are not complete cloud data centers. They are cloud connection points located in major cities that offer local caching of data for reduced response times.

55 C. The recovery time objective is the amount of time a system can be offline during a disaster; it measures how long it takes to get a service back online and available after a failure.

56 B. A warm site approach to recovering from a primary data center outage is when the remote backup of the site is offline except for critical data storage, which is usually a database.

57 E. A cold site is a backup data center provisioned to take over operations in the event of a primary data center failure, but the servers and infrastructure are not deployed or operational until needed.

58 B. The restore point objective is the point in time to which data can be recovered; it defines how much data loss is acceptable after an outage.

59 A. Synchronous replication offerings write data to both the primary storage system and the replica simultaneously to ensure that the remote data is current with local replicas.
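The difference between the replication modes in questions 53 and 59 can be sketched as two write paths; the classes are illustrative, with in-memory lists standing in for the local and remote storage arrays.

```python
import queue

class SynchronousReplica:
    """The write is acknowledged only after both copies are committed."""
    def __init__(self):
        self.primary, self.replica = [], []

    def write(self, block):
        self.primary.append(block)
        self.replica.append(block)   # remote copy completes before the ack
        return "ack"

class AsynchronousReplica:
    """The write is acknowledged after the primary; the replica catches up later."""
    def __init__(self):
        self.primary, self.replica = [], []
        self.pending = queue.Queue()

    def write(self, block):
        self.primary.append(block)
        self.pending.put(block)      # queued for later shipment to the remote site
        return "ack"

    def ship_pending(self):
        while not self.pending.empty():
            self.replica.append(self.pending.get())
```

Synchronous replication trades write latency for a remote copy that is always current; asynchronous replication acknowledges faster but leaves a window in which the replica lags the primary.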

60 B, C. The restore point and restore time objectives are the measurements for the amount of data lost and the time needed to get back online after an outage.
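A worked example with hypothetical timestamps shows how the two measurements in question 60 differ: one bounds data loss, the other bounds downtime.

```python
from datetime import datetime

last_good_backup = datetime(2023, 5, 1, 2, 0)    # illustrative timestamps
failure_time     = datetime(2023, 5, 1, 9, 30)
service_restored = datetime(2023, 5, 1, 13, 30)

# Data written after the last backup is lost: this window must fit the
# restore point objective.
data_loss_hours = (failure_time - last_good_backup).total_seconds() / 3600

# Time from failure until the service is back online must fit the
# restore time objective.
downtime_hours = (service_restored - failure_time).total_seconds() / 3600
```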

61 C. Cloud automation systems offer the ability to add and remove resources dynamically as needed; this is referred to as elasticity.

62 B. With the PaaS model, the cloud provider will maintain the operating system and all supporting infrastructure.

63 C. The higher up the services stack you go, from IaaS to PaaS to SaaS, the more difficult it will be to migrate. With IaaS, most of the cloud operations are under your direct control, which gives you the most flexibility to migrate. However, if the cloud provider controls the application, you may not have many migration options.

64 A, B, D. Cloud computing operates with a utility business model that charges you only for the resources that you consume. This model enables you to scale your cloud fleet to meet its current workload and be able to add and remove capacity as needed. There are many options to use elasticity to scale cloud operations, including vertical and horizontal scaling and bursting.

65 B. Scaling up, or vertical scaling, adds resources to an existing server, such as additional CPUs or more RAM. When you scale up, you are increasing your compute, network, or storage capabilities.

66 C. The establishment of average usage over time is the data that gets collected for a baseline report.

67 B. Cloud bursting allows for adding capacity from another cloud service during times when additional resources are needed.

68 B. The measurement of the difference between a current reading and the baseline value is referred to as the variance.
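As a simple illustration of question 68, with an assumed 60% CPU baseline and a 75% current reading:

```python
def variance_from_baseline(current, baseline):
    """Return the absolute and percentage difference from the baseline value."""
    delta = current - baseline
    return delta, (delta / baseline) * 100

delta, pct = variance_from_baseline(current=75.0, baseline=60.0)
# CPU at 75% against a 60% baseline: a variance of 15 points, or 25%.
```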

69 C. Change management includes recording the change, planning for the change, testing the documentation, getting approvals, evaluating and validating, writing instructions for backing out the change if needed, and doing post-change review if desired.

70 C. The ability to disable an account can be helpful in situations where the account will need to be reactivated at a future date and does not need to be deleted.

71 B, E, F. Trends, usage, and deficiencies are all management report outputs that can be identified using object tracking.

72 A, D, E. CPU, RAM, and network utilization are all important objects to manage for capacity and utilization tracking. Storage volume tiers and OS versions do not apply to this scenario.

73 D. Objects are queried to gather metric data.

74 B. Tracking object performance data should match with the guaranteed levels outlined in the service level agreement.

75 C. A dashboard is a configurable graphical representation of current operational data.

76 B. If a server is using all of its network bandwidth, then the most logical solution is to increase the network adapter's bandwidth or add a second adapter and create a teaming configuration.

77 A. Horizontal scaling is the process of adding servers to a pool for increased capacity. Round-robin is a load-balancing metric and does not apply. Elasticity is the ability to add and remove resources, autoscaling is the automated process of adding and removing capacity, and vertical scaling is expanding a server.
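The distinction in question 77 can be illustrated with Python's itertools.cycle: round-robin distributes requests across the existing pool, while horizontal scaling grows the pool itself. The server names are hypothetical.

```python
import itertools

pool = ["web1", "web2", "web3"]   # current fleet behind the load balancer
rr = itertools.cycle(pool)

# Round-robin: each incoming request goes to the next server in turn.
assignments = [next(rr) for _ in range(6)]
# → ['web1', 'web2', 'web3', 'web1', 'web2', 'web3']

# Horizontal scaling: add a server to the pool to increase total capacity.
scaled_pool = pool + ["web4"]
```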

78 E. Vertical scaling is the process of upgrading or replacing a server with one that has greater capabilities.

79 A, C, D. Server performance can be increased by adding CPU processing, memory, and network capacity. SLA, ACL, and DNS are not related to increasing server capacity.

80 D. Cloud reports are formatted collections of data contained in the management or monitoring applications.

81 B. The cloud service provider owns its automation and orchestration systems, and they cannot be directly accessed by the customer.

82 C. It's common for batch processing to be performed on database applications.

83 B. A large number of users downloading a new application would cause an increase in network bandwidth usage.

84 D. A baseline measurement is used as a reference to determine cloud capacity increases and decreases.

85 C. The Domain Name System records need to be changed to reflect the new IP address mapped to the domain name.

86 C. Databases read and write requests utilize storage I/O and should be the focus for troubleshooting.

87 C. Elasticity allows for cloud services to expand and contract based on actual usage and would be applicable to increasing storage capacity.

88 C. Workflow applications track a process from start to finish and sequence the applications that are required to complete the process.

89 A, C, D. Resources such as the amount of RAM needed, CPU cycles, and storage capacity are common systems that may become saturated as your cloud compute requirements grow.

90 B, C. In addition to the web servers, IP addresses may be required for the DNS server and the default gateway.

91 B. The question is asking about being able to access a specific cloud service. This would concern Jill having the authorization to access the storage volume. Authentication and SSO are login systems and not rights to services. A federation links user databases.

92 A. The tracert and traceroute commands are useful for network path troubleshooting. These commands show the routed path a packet of data takes from source to destination. You can use them to determine whether routing is working as expected or whether there is a route failure in the path. The other options are all incorrect because they do not provide network path data.

93 B, D. The Windows command-line utility nslookup resolves domain names to IP addresses. The Linux equivalent is the dig command. The other options are not valid for the solution required in the question.

94 B. The Windows Remote Desktop Protocol allows for remote connections to a Windows graphical user desktop.

95 C. The tcpdump utility allows a Linux system to capture live network traffic, and it is useful in monitoring and troubleshooting. Think of tcpdump as a command-line network analyzer. The dig and nslookup commands show DNS resolution but do not display the actual packets going across the wire. netstat shows connection information and is not DNS-related.

96 E. In a data center, terminal servers are deployed and have several serial ports, each cabled to a console port on a device that is being managed. This allows you to make an SSH or a Telnet connection to the terminal server and then use the serial interfaces to access the console ports on the devices to which you want to connect. The other options do not provide serial port connections.

97 C. Infrastructure security is the hardening of the facility and includes the steps outlined in the question, including nondescript facilities, video surveillance, and biometric access.

98 C, E. Common remote access tools include RDP, SSH, and terminal servers. IDSs/IPSs are for intrusion detection, and DNS is for domain name–to–IP address mappings and is not a utility for remote access.

99 C. A secure Internet-based connection would be a VPN.

100 A. Logging into systems is referred to as authentication. Also, the question references multifactor authentication (MFA) as part of the system.
