AWS interview questions FAQ

AWS solution architect interview questions FAQ

 

 =====================================

1. How would you model a data warehouse on AWS for a business that has many small, fast-growing datasets?

I would use a data lakehouse architecture. The data would be ingested into a data lake on Amazon S3 in a columnar format like Parquet. I would use AWS Glue to create a data catalog and structure the data. For querying, I would use Amazon Athena for ad-hoc queries and Amazon Redshift Serverless for more demanding analytics, as it scales automatically.


2. Explain how to model data for a search and analytics use case using Amazon OpenSearch Service.

For Amazon OpenSearch Service, the data model should be designed for document search. Each document would represent a record, with fields for indexing. I would use a de-normalized model to make data retrieval fast. I would also model nested fields carefully to ensure proper indexing. The data would be ingested using a service like Amazon Kinesis Firehose or AWS Lambda.


3. How do you approach data modeling for data migration from an on-premise data warehouse to AWS?

I would start by creating a data model that mirrors the on-premise source to ensure data fidelity. Then, I would create an optimized data model for the target AWS service, such as a star schema for Amazon Redshift. The migration process would involve two stages: lift-and-shift to a staging area (like S3), followed by an ETL process to transform the data into the final target model.


4. What are the key considerations for data modeling when using Amazon Redshift Spectrum?

When using Redshift Spectrum, the data model should prioritize partitioning and file format. Data in the S3 data lake should be partitioned on columns used in WHERE clauses (e.g., date, region). Using a columnar file format like Parquet is crucial as Spectrum will only scan the required columns, significantly reducing query time and cost.


5. How would you design a data model for a financial application that requires strong ACID compliance?

For a financial application, I would use a database that provides strong ACID guarantees. Amazon Aurora is a strong candidate for its durability and high availability. For data modeling, I would use a highly normalized relational model to prevent data redundancy and ensure transactional integrity. I would also use primary keys and foreign keys to enforce relationships.


6. Explain how to model an IoT data stream for analysis using AWS IoT Core and a data lake.

The data model for an IoT data stream should be simple and easy to parse. I would model the data as flat JSON or a similar format with a timestamp, device ID, and key-value pairs for sensor readings. AWS IoT Core would route the data to a service like Kinesis Firehose, which would automatically ingest it into an S3 data lake, partitioned by time for efficient querying.
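As a minimal sketch, a flat device payload might look like the following (the field names are illustrative assumptions, not a fixed schema):

```python
import json
import time

# A flat JSON payload with a timestamp, device ID, and key-value sensor readings.
payload = {
    "device_id": "sensor-001",              # unique device identifier
    "timestamp": int(time.time() * 1000),   # epoch milliseconds, used for time partitioning
    "temperature": 22.4,                    # sensor readings as simple key-value pairs
    "humidity": 51.0,
}
message = json.dumps(payload)  # flat JSON is easy for Firehose to deliver and Athena to parse
```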


7. What is the role of a data mesh in a data modeling strategy, and how do you implement it on AWS?

A data mesh is a decentralized data architecture where data is owned and managed by domain-specific teams. Data modeling is performed within each domain. On AWS, you would implement this using a central AWS Lake Formation to manage data governance and access. Each domain would have its own S3 buckets and data pipelines, and they would use a shared data catalog to expose their data products.


8. How do you model data for data sharing using AWS Data Exchange?

For AWS Data Exchange, the data model must be well-defined and packaged. The data must be organized into a specific format (e.g., CSV, Parquet) and stored in an S3 bucket. The data model should be clear and well-documented for consumers to understand. The provider would then use the AWS Data Exchange service to publish the dataset and set up subscription terms.


9. How would you model data for a recommendation engine using AWS?

For a recommendation engine, I would use a graph-based or a relational model. A graph model (e.g., using Amazon Neptune) would be ideal for modeling users and their relationships to products. A relational model (e.g., using Redshift) would also work, using a star schema with a fact table for user-product interactions and dimension tables for users and products.


10. How do you handle unstructured data modeling in a data lake on AWS?

For unstructured data like images, audio, or video, the data model is based on metadata. The unstructured data would be stored in an S3 bucket. I would use services like Amazon Rekognition or Amazon Transcribe to extract structured metadata (e.g., object tags, text transcripts) and store it in a separate data lake table. The metadata would then be used for searching and analysis.
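A minimal boto3 sketch of extracting label metadata with Amazon Rekognition (the bucket and key names are placeholders):

```python
import boto3

rekognition = boto3.client("rekognition")

# Detect labels for an image already stored in S3.
response = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "my-media-lake", "Name": "images/cat.jpg"}},
    MaxLabels=10,
    MinConfidence=80.0,
)

# Flatten the labels into a metadata record that can be written to a data
# lake table (e.g., as JSON or Parquet) for later search and analysis.
metadata = {
    "s3_key": "images/cat.jpg",
    "labels": [label["Name"] for label in response["Labels"]],
}
print(metadata)
```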


11. What is the difference between a key-value data model and a document data model on AWS?

A key-value data model (e.g., in DynamoDB) is a simple structure with a single attribute identifier. It is optimized for retrieving a single item by its key. A document data model (e.g., in DynamoDB as well) stores data in a semi-structured format, allowing for nested attributes and a flexible schema. It is ideal for storing data with varying structures, like user profiles.


12. How do you model a data warehouse on AWS that combines transactional data with external data sources?

I would use a data lakehouse architecture. The transactional data would be loaded from a database like Amazon Aurora into an S3 data lake. External data sources would also be ingested into the same data lake. Using AWS Glue and Redshift Spectrum, I would join the transactional data with the external data in a single query, creating a unified view.


13. What is the purpose of using a denormalized data model in Amazon Redshift?

A denormalized data model in Redshift improves query performance by reducing the number of joins. By duplicating data from dimension tables into the fact table, queries can be executed faster as they don't need to join multiple tables. This is a common practice in data warehousing where read performance is more important than storage efficiency.


14. How would you design a data model for a global e-commerce application using AWS services?

For a global e-commerce application, the data model must be highly available and scalable. I would use a combination of services. Amazon DynamoDB with global tables would handle the real-time transactions. The data would then be streamed into an S3 data lake for analytical purposes. I would use a star schema in a data warehouse like Redshift for business intelligence, allowing for regional and global analysis.


15. Describe a data model for an application that uses a combination of Redis and a relational database on AWS.

I would use a relational database (e.g., Amazon Aurora) for the core data model, ensuring data integrity and transactional consistency. I would use an Amazon ElastiCache for Redis instance as a caching layer for frequently accessed data. The data model in Redis would be a simple key-value pair, with a key representing the entity (e.g., user:123) and a value containing the most up-to-date data for that entity.


16. How do you model data for data governance and access control using AWS Lake Formation?

In AWS Lake Formation, the data model is defined in the AWS Glue Data Catalog. You would create a centralized metadata repository for your S3 data. For access control, you would use Lake Formation's permissions model to define granular access at the database, table, or even column level. This allows you to grant specific permissions to different users or groups in a centralized manner.
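A minimal boto3 sketch of granting column-level SELECT through Lake Formation (the role ARN, database, table, and column names are placeholders):

```python
import boto3

lakeformation = boto3.client("lakeformation")

# Grant column-level SELECT on a Data Catalog table to an IAM role.
lakeformation.grant_permissions(
    Principal={"DataLakePrincipalIdentifier": "arn:aws:iam::123456789012:role/AnalystRole"},
    Resource={
        "TableWithColumns": {
            "DatabaseName": "sales_db",
            "Name": "orders",
            "ColumnNames": ["order_id", "order_date", "amount"],  # e.g., exclude PII columns
        }
    },
    Permissions=["SELECT"],
)
```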


17. What are the key differences in data modeling for a data lake vs. a data warehouse, in terms of schema?

A data lake on S3 typically uses a schema-on-read approach, where the schema is inferred at query time. This is flexible and allows for storing diverse data formats. A data warehouse like Redshift uses a schema-on-write approach, where the data is transformed and validated to fit a predefined schema before being loaded. This ensures data consistency and fast query performance.


18. How would you model data for a customer 360-degree view using AWS?

I would use a data lakehouse architecture. The data from various sources (e.g., CRM, e-commerce, social media) would be ingested into an S3 data lake. Using AWS Glue, I would cleanse and join the data, modeling it into a unified customer profile. A dimensional model in Amazon Redshift would then be used for analytics, with a central fact table for customer activity and dimension tables for customer attributes.


19. Describe the data modeling approach for an application that uses AWS Lambda and S3.

For an application using Lambda and S3, the data model is typically file-based. Data is stored as files (e.g., CSV, JSON) in an S3 bucket. A Lambda function would be triggered by an event (e.g., a new file upload) to process the data. The data model within the file would be designed for simple parsing. For structured queries, the data could be partitioned in S3 and queried with Amazon Athena.
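A minimal sketch of such a Lambda handler for an S3 ObjectCreated trigger (it assumes the uploaded files are JSON):

```python
import json
import urllib.parse
import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    """Triggered by an S3 ObjectCreated event; processes each new file."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        obj = s3.get_object(Bucket=bucket, Key=key)
        data = json.loads(obj["Body"].read())  # assumes the file contains JSON
        # ... transform the records and write the results elsewhere ...
        print(f"Processed s3://{bucket}/{key} with {len(data)} records")
```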


20. How would you model a data warehouse on AWS to support both historical data analysis and real-time reporting?

I would use a hybrid approach. For historical analysis, I would use a data warehouse like Amazon Redshift with a star schema, loading data in daily or hourly batches. For real-time reporting, I would use a service like Amazon Kinesis Analytics to perform SQL queries on the streaming data. The Kinesis output could be fed to a dashboard or a low-latency database like DynamoDB for up-to-the-minute reports.

====================================================================

1. How would you design a data model for a multi-tenant SaaS application on Amazon DynamoDB?

For a multi-tenant SaaS application on DynamoDB, I would use a single-table design. The partition key would be a composite key that includes the TenantID. This ensures that data for each tenant is physically isolated. I would use the sort key to define different data types for that tenant, such as USER#<UserID> or PRODUCT#<ProductID>, enabling efficient retrieval of all data for a specific tenant with a single query.
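A minimal boto3 sketch of that access pattern (the table name and the PK/SK attribute names are illustrative assumptions):

```python
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("SaasAppTable")  # placeholder table name

# Fetch every USER item for one tenant with a single query: the partition key
# isolates the tenant, and begins_with on the sort key selects the item type.
response = table.query(
    KeyConditionExpression=(
        Key("PK").eq("TENANT#acme") & Key("SK").begins_with("USER#")
    )
)
users = response["Items"]
```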


2. Explain how to model a star schema in Amazon Redshift, and what are the key considerations?

To model a star schema in Redshift, you would create fact and dimension tables. The key considerations are choosing the right distribution style and sort keys. The DISTKEY should be set on a column used for joins to the fact table, such as CustomerID or ProductID, to colocate data and minimize data transfer. The SORTKEY should be set on columns frequently used in WHERE clauses, like DateID, to speed up queries.
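A hedged DDL sketch of such a star schema, with illustrative table and column names, executed here through psycopg2 (any Postgres-compatible client can talk to Redshift):

```python
import psycopg2  # assumption: a Postgres-compatible driver is used for Redshift

# Illustrative star-schema DDL: a small dimension replicated to every node,
# and a fact table distributed on its join key and sorted by date.
DIM_DDL = """
CREATE TABLE dim_customer (
    customer_id   INT PRIMARY KEY,
    customer_name VARCHAR(256)
) DISTSTYLE ALL;
"""

FACT_DDL = """
CREATE TABLE fact_sales (
    sale_id     BIGINT,
    customer_id INT,
    date_id     INT,
    amount      DECIMAL(12, 2)
)
DISTKEY (customer_id)   -- colocates fact rows with the dimension join key
SORTKEY (date_id);      -- speeds up date-range WHERE clauses
"""

conn = psycopg2.connect(host="my-cluster.example.us-east-1.redshift.amazonaws.com",
                        port=5439, dbname="dev", user="admin", password="...")
with conn, conn.cursor() as cur:
    cur.execute(DIM_DDL)
    cur.execute(FACT_DDL)
```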


3. How do you handle a data model with complex many-to-many relationships in AWS?

For complex many-to-many relationships, the best service depends on the use case. For analytical needs, you would use an intermediate fact table in a data warehouse like Amazon Redshift. For transactional or highly connected data, a graph database like Amazon Neptune is the optimal choice. It models data as nodes and edges, making it efficient to traverse and query complex relationships.


4. What is the difference in data modeling between a relational database (Amazon Aurora) and a document database (DynamoDB)?

In a relational database like Aurora, the data is highly normalized into multiple tables with defined schemas and relationships enforced by foreign keys. In a document database like DynamoDB, the data is de-normalized and stored in a single table, with related items grouped together. The schema is flexible and can evolve over time, making it suitable for JSON-like data.


5. Describe how to model a data lake using the "medallion architecture" on AWS.

The medallion architecture has three layers. The Bronze layer stores raw, unchanged data directly from the source, typically in Amazon S3. The Silver layer holds cleaned, validated, and de-duplicated data, often in a columnar format like Parquet. The Gold layer contains aggregated and enriched data, optimized for specific use cases like business intelligence, with data modeled in a star or snowflake schema.


6. How would you model data for a time-series analytics application using Amazon Timestream?

For Amazon Timestream, a time-series database, you model your data with dimensions and a measure. Dimensions are the attributes that define the series, such as device_id or sensor_type. The measure is the actual data point you are tracking over time, like temperature or pressure. Timestream automatically handles partitioning and sorting by time, optimizing for fast ingestion and query performance.
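A minimal boto3 sketch of writing one record (the database, table, and dimension values are placeholders):

```python
import time
import boto3

timestream = boto3.client("timestream-write")

# One record: dimensions identify the series, the measure is the tracked value.
timestream.write_records(
    DatabaseName="iot_db",
    TableName="sensor_readings",
    Records=[{
        "Dimensions": [
            {"Name": "device_id", "Value": "sensor-001"},
            {"Name": "sensor_type", "Value": "thermostat"},
        ],
        "MeasureName": "temperature",
        "MeasureValue": "22.4",
        "MeasureValueType": "DOUBLE",
        "Time": str(int(time.time() * 1000)),  # epoch milliseconds
    }],
)
```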


7. What is the purpose of using a sort key in an Amazon DynamoDB data model?

In DynamoDB, the sort key works with the partition key to create a unique identifier for an item. Its main purpose is to organize and group related items, enabling efficient queries on a range of values. This allows you to retrieve multiple items with a single query, which is crucial for a single-table design where different data types are stored under the same partition key.


8. How do you design for schema evolution in a data lake on Amazon S3?

To design for schema evolution, you should use self-describing file formats like Apache Parquet or Apache Avro, which embed schema information. When a schema changes, you can create a new version of the data in a separate folder or partition. Services like AWS Glue Data Catalog will then manage the different schema versions, allowing you to query all data without breaking older queries.


9. What are the key differences in data modeling for Amazon Aurora vs. Amazon Redshift?

Amazon Aurora is a relational database optimized for OLTP (Online Transaction Processing). Its data model is highly normalized, focusing on data integrity and transactional consistency. Amazon Redshift is a data warehouse optimized for OLAP (Online Analytical Processing). Its data model is typically de-normalized (e.g., a star schema), focusing on fast query performance over large datasets.


10. How would you model a data warehouse on AWS that uses a data lake for raw data?

This is a data lakehouse architecture. Raw, unprocessed data is stored in a data lake on Amazon S3. A data modeling layer in the data lake (e.g., using Parquet files and partitioning) structures the data. Finally, a data warehouse like Amazon Redshift can query the structured data directly from the S3 data lake using Redshift Spectrum, combining the flexibility of a data lake with the performance of a data warehouse.


11. Describe how to model data for a graph use case using Amazon Neptune.

For Amazon Neptune, you model data as nodes and edges. Nodes represent entities (e.g., Users, Products), while edges represent the relationships between them (e.g., Follows, Purchased). Each node and edge has properties (attributes) that describe it. This model is highly effective for use cases where relationships and connections, like social networks or recommendation engines, are the primary focus of your queries.


12. How do you model data for data sharing and governance in a multi-account AWS environment?

For data sharing across multiple accounts, you would use a centralized data lake on Amazon S3 with AWS Lake Formation. The data is modeled once, and Lake Formation provides a central place to manage permissions. You can grant access to specific databases or tables to IAM users or roles in other accounts, ensuring controlled access and simplifying governance across the organization.


13. What is the role of a data catalog (AWS Glue Data Catalog) in a data model?

The AWS Glue Data Catalog serves as a central metadata repository for all your data. It stores the schema, location, and other attributes of your tables, allowing different services to "discover" and understand your data. It acts as the backbone for your data model, enabling a single source of truth and making it easy for services like Amazon Athena, Amazon Redshift Spectrum, and Amazon EMR to query your data without manual schema definitions.


14. How would you model data for machine learning feature engineering in an AWS environment?

For feature engineering, the data model needs to be optimized for feature extraction and transformation. I would use an S3 data lake as the central repository for raw data. I would then use AWS Glue or Amazon EMR with Apache Spark to transform the data, modeling it into a flattened, wide table where each row represents an instance and each column a feature. The final dataset would be stored in Parquet format.


15. What are some of the key trade-offs in data modeling for a data lake vs. a data warehouse on AWS?

A data lake on S3 offers a flexible, schema-on-read model, allowing you to store raw data cheaply and handle a variety of formats. However, query performance can be inconsistent. A data warehouse like Redshift uses a schema-on-write model, which provides very fast, consistent query performance but is less flexible and more expensive for raw data storage. The choice depends on the specific use case and performance requirements.


16. How would you design a data model for a real-time leaderboard in a gaming application using AWS services?

For a real-time leaderboard, a key-value data model is most suitable. I would use Amazon DynamoDB with the player ID as the partition key and a score attribute. I would also create a Global Secondary Index (GSI) with the score as the partition key. This allows the application to query the GSI to retrieve the highest scores efficiently, without scanning the entire table.
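A common refinement of this GSI idea pins the index partition key to a constant so that the score serves as the index sort key, letting a single Query return the top N. A hedged boto3 sketch, assuming a GSI named score-index and an attribute gsi_pk (both illustrative):

```python
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("GameScores")  # placeholder names throughout

# Record a score; gsi_pk is a constant so all scores land in one GSI partition,
# which lets the numeric score act as the GSI sort key for ranked reads.
table.put_item(Item={"player_id": "player-42", "score": 9001, "gsi_pk": "LEADERBOARD"})

# Top-10 query against the GSI, highest scores first.
response = table.query(
    IndexName="score-index",
    KeyConditionExpression=Key("gsi_pk").eq("LEADERBOARD"),
    ScanIndexForward=False,  # descending order by the GSI sort key (score)
    Limit=10,
)
top_ten = response["Items"]
```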


17. How do you handle dimensional modeling with JSON data in an S3 data lake?

To handle JSON data, you can use a schema-on-read approach. First, store the raw JSON files in an S3 data lake. Then, use AWS Glue to crawl the data and infer the schema, including nested JSON objects. This allows services like Amazon Athena or Amazon Redshift Spectrum to query the data using SQL, treating the nested JSON as a structured column or flattening it with a UNNEST function.
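A minimal boto3 sketch of running such a query through Athena (the database, table, column names, and results bucket are placeholders):

```python
import boto3

athena = boto3.client("athena")

# Read a nested JSON field and flatten an array column with UNNEST;
# assumes raw_orders has a struct column "customer" and an array column "order_items".
query = """
SELECT customer.id AS customer_id, item.sku
FROM   raw_orders
CROSS JOIN UNNEST(order_items) AS t(item)
"""

execution = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "data_lake_db"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
print(execution["QueryExecutionId"])
```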


18. Describe a data model for an IoT application that uses Amazon Kinesis and S3.

For an IoT application, you would model data as a stream of events. Devices would send data to a Kinesis Data Stream. A Kinesis Data Firehose would then ingest the data and automatically partition it based on a time key into an S3 data lake. The data would be modeled with a simple, flat structure for easy analysis, with a timestamp and device ID to track each event.


19. How would you model an audit trail or log data that is stored in S3?

For audit trail or log data, a time-series model is most appropriate. The data would be stored in an S3 data lake, partitioned by date (year/month/day). This makes it easy to query data from a specific time range. I would use a columnar format like Parquet for efficient storage and use Amazon Athena to perform ad-hoc analysis on the logs.


20. When would you use a combination of DynamoDB and Redshift in a data model?

You would use DynamoDB for the transactional, operational side of your application, as it is a low-latency, high-throughput key-value store. You would then use Amazon Redshift for the analytical side. Data from DynamoDB would be periodically loaded into Redshift (e.g., using AWS Glue) for complex business intelligence and reporting queries that would be inefficient to run on a transactional database.


=====================================================

 1. How would you design a cost-effective data warehousing solution on AWS?

A cost-effective solution involves using services that scale and charge based on usage. I would use Amazon Redshift Serverless for its pay-as-you-go model. For data ingestion, AWS Glue and AWS Lambda can be used to handle ETL jobs and data processing. Data can be stored in Amazon S3, leveraging its different storage classes (e.g., S3 Intelligent-Tiering) to automatically optimize costs based on access patterns.


2. Explain the purpose of AWS Glue Data Catalog.

The AWS Glue Data Catalog is a persistent metadata store for your data assets. It serves as a central repository for table definitions, schema versions, and other metadata. It allows various AWS services like Amazon Athena, Amazon Redshift Spectrum, and Amazon EMR to discover and query data without needing to define schemas repeatedly. This helps to create a single source of truth for your data and simplifies data governance.


3. What is the difference between Amazon Redshift and Amazon RDS?

Amazon Redshift is a petabyte-scale data warehouse service optimized for complex analytical queries on large datasets. It uses columnar storage and massively parallel processing (MPP) architecture. Amazon RDS (Relational Database Service) is a managed service for relational databases (like MySQL, PostgreSQL, etc.) designed for online transaction processing (OLTP) and transactional workloads.


4. How do you handle schema evolution in a data lake?

To handle schema evolution, I would use file formats like Parquet or Avro which store schema information within the file itself. I would also version my data, creating separate folders for different schema versions. Using a service like AWS Glue with a Data Catalog helps manage these schema changes by allowing you to add, remove, or modify columns without breaking queries on older data.


5. When would you use Amazon EMR?

Amazon EMR (Elastic MapReduce) is a managed cluster platform that simplifies running big data frameworks like Apache Hadoop and Apache Spark. I would use it for complex data processing, machine learning, and data transformations that require distributed computing. EMR is ideal for use cases where you need fine-grained control over the cluster and the ability to run custom code.


6. Describe a serverless data ingestion pipeline.

A serverless data ingestion pipeline could use Amazon Kinesis Data Firehose to stream data directly into Amazon S3. For real-time processing, I would use AWS Lambda to transform data on the fly. AWS Glue could be used for more complex ETL jobs. The data in S3 can then be queried directly using Amazon Athena, completing the serverless data flow.


7. How do you ensure data security in an AWS data lake?

Data security is achieved through a multi-layered approach. I would use IAM roles and policies to control access to S3 buckets and other services. Data at rest would be encrypted using KMS keys, and data in transit would be secured using SSL/TLS. AWS Lake Formation would be used to manage fine-grained, column-level access control to the data.


8. What is the purpose of Amazon Athena?

Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. It is a serverless service, so you don't have to manage any infrastructure. Athena is perfect for ad-hoc queries, quick data exploration, and creating reports on large datasets in a data lake.


9. How would you handle a batch processing workload on AWS?

For batch processing, I would use a combination of services. AWS Glue would be my primary choice for ETL jobs, as it is a fully managed service that can scale on demand. AWS Step Functions can be used to orchestrate complex workflows involving multiple steps, and AWS Lambda can handle smaller, event-driven tasks. The processed data can be stored back in S3.


10. When would you use Amazon S3 as a data lake?

Amazon S3 is the foundational service for a data lake due to its durability, scalability, and cost-effectiveness. I would use it to store raw, semi-structured, and structured data. S3's flexibility allows you to store data in its native format, and its integration with other AWS services makes it a powerful and centralized hub for all your data.


11. What are the key considerations for selecting a data format for a data lake?

Key considerations include query performance, cost, and schema evolution. I would prefer columnar formats like Parquet over row-based formats like JSON or CSV. Parquet is highly compressible, leading to lower storage costs, and it allows services like Athena and Redshift Spectrum to read only the necessary columns, significantly improving query performance.


12. How do you implement data governance in a multi-account AWS environment?

Data governance in a multi-account setup requires a centralized approach. I would use AWS Lake Formation to manage a centralized data catalog and permissions across accounts. This allows me to define access policies once and apply them to all data. AWS Organizations would be used to manage accounts, and IAM roles would grant cross-account access to data.


13. What is the difference between a star schema and a snowflake schema?

A star schema is a data modeling approach where a central fact table connects to multiple dimension tables. It is simpler and de-normalized, making it good for query performance. A snowflake schema is an extension of the star schema where dimension tables are further normalized into sub-dimension tables. This reduces data redundancy but increases query complexity.


14. How would you design a real-time analytics solution for clickstream data?

For clickstream data, I would use Amazon Kinesis Data Streams to ingest the real-time data. AWS Kinesis Data Analytics could be used to perform real-time SQL queries on the streams. The processed data could then be fed into a real-time dashboard or stored in Amazon S3 for further analysis. This provides a scalable, end-to-end solution.


15. What are the benefits of using a columnar data format?

Columnar data formats, such as Parquet and ORC, store data column-by-column rather than row-by-row. This is highly efficient for analytical queries because the database only needs to read the columns that are part of the query. It also enables better data compression, which reduces storage costs and improves I/O performance.


16. How would you optimize Amazon Redshift performance?

I would optimize Redshift performance by using a well-defined distribution style for tables, such as DISTKEY or ALL. Sorting data with SORTKEY improves query performance by reducing the amount of data scanned. I would also use VACUUM and ANALYZE commands regularly to maintain data and statistics. Finally, I would ensure appropriate node sizing.


17. Explain the role of AWS Lake Formation.

AWS Lake Formation simplifies the process of setting up, securing, and managing a data lake. It helps you collect and catalog data, and define a centralized security policy. With Lake Formation, you can grant or revoke fine-grained access to specific tables or columns for different users and groups, which simplifies data governance significantly.


18. How do you approach data partitioning?

Data partitioning involves organizing data into a hierarchical directory structure based on one or more columns. This is essential for improving query performance and reducing costs. For example, I would partition data by date (year=2024/month=08/day=28) so that queries only scan relevant data. This is particularly effective in a data lake environment.
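A minimal boto3 sketch of writing an object under a Hive-style partitioned prefix (the bucket name and prefix layout are illustrative):

```python
from datetime import date, datetime
import boto3

s3 = boto3.client("s3")

# Build a Hive-style partitioned key: events/year=YYYY/month=MM/day=DD/...
today = date.today()
key = (f"events/year={today.year}/month={today.month:02d}/"
       f"day={today.day:02d}/events-{datetime.utcnow():%H%M%S}.json")

s3.put_object(Bucket="my-data-lake", Key=key, Body=b'{"event": "click"}')
# Athena/Glue can now prune partitions: WHERE year=2024 AND month=8
# scans only the matching prefixes instead of the whole dataset.
```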


19. When would you use Amazon Redshift Serverless versus a provisioned Redshift cluster?

Amazon Redshift Serverless is ideal for unpredictable workloads, new projects, or when you want to avoid managing infrastructure. It scales automatically, and you pay only for the compute you use. A provisioned Redshift cluster is more suitable for stable, predictable workloads where you can optimize performance and cost by reserving capacity.


20. How would you design a data solution for a machine learning use case?

I would design a solution that provides a scalable and reliable data pipeline for ML teams. The raw data would be stored in an S3 data lake. I would use AWS Glue for feature engineering and data preparation. Amazon SageMaker could be used for model training, and the processed data would be stored in a format like Parquet for efficient access.


=====================================================

Abhishek

https://www.youtube.com/watch?v=qtkWHhikLh8

 

 

AWS Interview Questions

 

1. Explain in depth what AWS is?

 

AWS stands for Amazon Web Services. It is a group of remote computing services, also known as a cloud computing platform. This dimension of cloud computing is also known as IaaS, or Infrastructure as a Service.

 

2. What are the three varieties of cloud services?

 

The three different varieties of cloud services include:

  • Computing
  • Storage
  • Networking 

3. Define Auto-scaling?

Auto-scaling is a capability that lets you launch new instances on demand.
Moreover, auto-scaling helps you to increase or decrease resource capacity automatically, according to the application's load.
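As a minimal boto3 sketch, a target-tracking policy like the following (the group and policy names are placeholders) scales capacity up and down automatically:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Target-tracking policy: keep the group's average CPU utilization near 50%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="my-web-asg",
    PolicyName="keep-cpu-at-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,  # instances are added/removed to hold this average
    },
)
```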

 

4. What do you mean by AMI?

AMI stands for Amazon Machine Image. It is a kind of template that provides the information (an operating system, an application server, and applications) needed to launch an instance, which is a copy of the AMI running as a virtual server in the cloud. With the help of different AMIs, you can easily launch instances.

 

5. Can you illustrate the relationship between an instance and AMI?

With the help of just a single AMI, you can launch multiple instances, and of different types too. An instance type is characterized by the hardware of the host computer used for your instance. Each instance type provides different compute and memory capabilities. Once an instance is launched, it looks like a traditional host, and you can interact with it as you would with any computer.

 

6. What does geo-targeting in CloudFront mean?

Suppose you want your business to produce and show personalized content to the audience based on their geographic location without making any changes to the URL; that is what geo-targeting is for. Geo-targeting enables you to create customized content for audiences in a specific geographical area, keeping their needs in focus.

 

7. What is AWS S3?

S3 stands for Simple Storage Service.
AWS S3 can be utilized to store and retrieve any amount of data, at any time, from anywhere on the web. The payment model for S3 is pay-as-you-go.

 8. How can one send a request to Amazon S3?

 You can send the request by utilizing the AWS SDK or REST API wrapper libraries. 

 9. What is a default storage class in S3?

S3 Standard (frequent access) is the default storage class in S3.

10. What are the different storage classes available in Amazon S3?

The storage classes available in Amazon S3 are:

  • Amazon S3 Standard
  • Amazon S3 Standard-Infrequent Access
  • Amazon S3 Reduced Redundancy Storage
  • Amazon S3 Glacier

 
11. What are the ways to encrypt data in S3?

Three different server-side methods let you encrypt data in S3:

  • Server-side encryption with customer-provided keys (SSE-C)
  • Server-side encryption with Amazon S3-managed keys (SSE-S3)
  • Server-side encryption with AWS KMS-managed keys (SSE-KMS)


 

12. On what grounds is the pricing of S3 decided?

The following factors are taken into consideration when deciding S3 pricing:

  • Transfer of data
  • Storage that is utilized
  • Number of requests made
  • Transfer acceleration
  • Storage management

 

13. What are the different types of routing policies that are available in Amazon Route 53?

The various types of routing policies available are as follows:

  • Latency-based
  • Weighted
  • Failover
  • Simple
  • Geolocation

 

14. What is the maximum size of an S3 bucket?

An S3 bucket itself has no size limit; the commonly quoted five-terabyte maximum applies to a single S3 object.

 

15. Is Amazon S3 a global service?

Yes. Amazon S3 is a global service: bucket names are globally unique and data is accessible from anywhere on the web, although each bucket is created in a specific region. Its main objective is to provide object storage through a web interface, using Amazon's scalable storage infrastructure.

 

16. What are the important differences between EBS and S3?

Here we have listed down some of the essential differences between EBS and S3:

 

  • S3 is virtually unlimited in scale, whereas an EBS volume has a fixed provisioned size.
  • EBS is block storage; S3, on the other hand, is object storage.
  • EBS offers lower-latency access than S3.
  • A user can access EBS only through the EC2 instance it is attached to, but S3 can be accessed by anyone with the right permissions over the public internet.
  • EBS supports a file system interface, whereas S3 exposes a web (HTTP) interface.

 

17. What is the process to upgrade or downgrade a system that involves near-zero downtime?

 

With the following steps, one can upgrade or downgrade a system with near-zero downtime:

 

  • Open the EC2 console
  • Choose the AMI operating system
  • Launch an instance with the new instance type
  • Install the updates
  • Install the applications
  • Test the instance to check that it is working
  • If it is working, deploy the new instance and swap it with the older one
  • Once the swap is complete, the system has been upgraded or downgraded with near-zero downtime

 

18. What all is included in AMI?

 

AMI includes the following:

  • A template for the root volume of the instance
  • Launch permissions
  • A block device mapping that determines the volumes to be attached when the instance is launched

 

19. Are there any tools or techniques available that will help one understand if you are paying more than you should be and how accurate it is?

 

With the help of these below-mentioned resources, you will know whether the amount you are paying for the resource is accurate or not:

  • Check the Top Services table: You will find this on the dashboard in the Cost Management console; it displays the five most-used services and how much you are spending on the resources in question.
  • Cost Explorer: With Cost Explorer, you can view and analyze usage costs for the past 13 months and forecast spend for the next three months.
  • AWS Budgets: This lets you plan your budget efficiently.
  • Cost allocation tags: See which resources have cost you the most in a particular month, and organize and track your resources as well.

 

20. Apart from the console, is there any substitute tool available that will help me log into the cloud environment?

 

  • AWS CLI for Linux
  • Putty
  • AWS CLI for Windows
  • AWS CLI for Windows CMD
  • AWS SDK
  • Eclipse

 

21. Can you name some AWS services that are not region-specific?

 

  • IAM
  • Route 53
  • Web application firewall
  • CloudFront

 

22. Can you define EIP?

 

EIP stands for Elastic IP address. It is a static IPv4 address provided by AWS to support dynamic cloud computing.

 


 

AWS Interview Questions For VPC

 

23. What is VPC?

VPC stands for Virtual Private Cloud. VPC enables you to launch AWS resources into a virtual network. With its help, the network configuration can be built up and customized to match the user's business requirements.

 

24. Illustrate some security products and features that are available in VPC?

 

  • Security groups: These act as a firewall for EC2 instances and help control inbound and outbound traffic at the instance level.
  • Network access control lists: These act as a firewall for subnets and help control inbound and outbound traffic at the subnet level.
  • Flow logs: Flow logs capture the inbound and outbound traffic from the network interfaces in your VPC.

 

25. How can an Amazon VPC be monitored?

 

One can control VPC by using the following:

  • CloudWatch and CloudWatch logs
  • VPC flow logs

 

26. How many subnets can one have as per VPC?

One can have up to 200 subnets per VPC.

 

27. What default components do we get when one sets up an AWS VPC?

 

The list of defaults is as follows:

  • Network ACL
  • Security group
  • Route table

 

28. How can security to your VPC be controlled?

 

One can utilize security groups, network access control lists (ACLs), and flow logs to manage AWS VPC security.

 

29. Is broadcast or multicast supported by Amazon VPC?

No. As of now, Amazon VPC does not provide any support for broadcast or multicast traffic.

 

30. Explain the difference between a Domain and a Hosted Zone?

 

This is a frequently asked question.

Domain

A domain is a collection of data describing a self-contained administrative and technical unit. For example www.vinsys.com is a domain and a general DNS concept.

Hosted zone

A hosted zone is a container that holds information about how you want to route traffic on the internet for a specific domain. For example fls.vinsys.com is a hosted zone.

 

31. What are NAT gateways?

 

NAT stands for Network Address Translation. A NAT gateway enables instances in a private subnet to connect to the internet and other AWS services. Furthermore, NAT prevents the internet from initiating connections to those instances.

 

32. How many buckets can be set-up in AWS by default?

 

You can create up to 100 buckets in each AWS account by default.

 

33. How is SSH agent forwarding set-up so that you do not have to copy the key every time you log in?

 

Here are the steps to achieve the set-up for this:

  • Go to PuTTY Configuration
  • Navigate to Connection → SSH → Auth
  • Enable SSH agent forwarding and connect to your instance

 

AWS Interview Questions for Amazon EC2

 

34. What are the different varieties of EC2 instances based on their expenditure?

 

The three varieties of EC2 instances based on their cost are:

  • On-demand instances: Pay by the hour with no long-term commitment; convenient, but not recommended for long-term use.
  • Spot instances: Much less expensive; purchased by bidding on spare capacity, and can be interrupted.
  • Reserved instances: Recommended for those planning to use an instance for a year or more.

 

35. What is the best security practice for Amazon EC2?

 

Go through the following steps for secure Amazon EC2 best practice:

  • Utilize AWS Identity and Access Management to control access to your AWS resources.
  • Restrict access by allowing only trusted hosts or networks to access ports on your instance.
  • Review the rules in your security groups regularly.
  • Grant only the permissions that you need.
  • Disable password-based logins for instances launched from your AMI.

 

36. What are the steps to configure CloudWatch to recover an EC2 instance?

 

Here are the steps that will help you recover an EC2 instance (a scripted sketch follows the list):

  • Set up an alarm with the help of Amazon CloudWatch
  • In the alarm definition, go to the Actions tab
  • Select the "Recover this instance" option
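A minimal boto3 sketch of the same alarm created programmatically (the region, instance ID, and thresholds are illustrative):

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Recover the instance when its system status check fails for two periods.
cloudwatch.put_metric_alarm(
    AlarmName="recover-i-0123456789abcdef0",
    MetricName="StatusCheckFailed_System",
    Namespace="AWS/EC2",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=2,
    Threshold=1.0,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:automate:us-east-1:ec2:recover"],  # built-in recover action
)
```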

 

Other Important AWS Interview Questions

 

37. What are the various types of AMI designs?

 

The types are

  • Fully baked AMI
  • JeOS (Just enough Operating System) AMI
  • Hybrid AMI

38. How can a user gain access to a specific bucket?

One needs to complete the steps below to grant access:

  • Classify your instances
  • Elaborate on how licensed users can administer the specific server
  • Lockdown your tags
  • Attach your policies to IAM users

 

39. How can a current instance be added to a new Autoscaling group?

 

Here are the steps to add an existing instance to a new Auto Scaling group:

  • Open the EC2 console
  • Under Instances, select your instance
  • Choose Actions → Instance settings → Attach to Auto Scaling Group
  • Choose a new Auto Scaling group
  • Attach this group to the instance
  • Edit the instance if needed
  • Once done, you have successfully added the instance to the new Auto Scaling group

 

40. What is SQS?

 

SQS stands for Simple Queue Service. SQS provides a managed message-queue service: you can move data or messages from one application to another even when the receiving application is not active. With the help of SQS, messages can be sent between multiple services.

 

41. What are the types of queues in SQS?

 

There are two types of queues in SQS:

  • Standard queues: These provide a nearly unlimited number of transactions per second. Standard is the default queue type.
  • FIFO queues: These guarantee that messages are processed exactly once, in the precise order in which they were sent (see the sketch below).
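A minimal boto3 sketch of creating a FIFO queue and sending an ordered message (the queue and group names are placeholders):

```python
import boto3

sqs = boto3.client("sqs")

# FIFO queue names must end in ".fifo".
queue = sqs.create_queue(
    QueueName="orders.fifo",
    Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "true"},
)

# MessageGroupId preserves ordering within a group of related messages.
sqs.send_message(
    QueueUrl=queue["QueueUrl"],
    MessageBody='{"order_id": 123}',
    MessageGroupId="customer-42",
)
```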

 

42. What are the different types of instances available?

 

Below we have mentioned the following types of instances that are available:

  • General-purpose
  • Storage-optimized
  • Accelerated computing
  • Compute-optimized
  • Memory-optimized

 

43. What aspects need to be considered while migrating to Amazon Web Services?

 

Have a look at the aspects that need to be taken into consideration:

  • Operational costs
  • Workforce productivity
  • Cost avoidance
  • Operational resilience
  • Business agility

 

44. What are the components of an AWS CloudFormation template?

 

An AWS CloudFormation template can be written in YAML or JSON and consists of five essential elements:

  • Template parameters
  • Output values
  • Data tables (mappings)
  • Resources
  • File format version

 

45. What are the key pairs in AWS?

 

Key pairs are the secure login credentials for your virtual machines. To connect to your instances, you use a key pair, which consists of a public key and a private key.

 

46. How many Elastic IPs does AWS allow you to set up?

 

By default, five VPC Elastic IP addresses are granted per region for each AWS account.

 

47. What are the advantages of auto-scaling?

 

Here are the various advantages of auto-scaling:

  • Autoscaling provides fault tolerance
  • Provides much-improved availability
  • Better cost management policy.

 

48. How can old snapshots be auto-deleted?

 

Have a look at the steps to auto-delete old snapshots:

  • As a best practice, snapshots of EBS volumes should be stored on Amazon S3.
  • AWS Ops Automator can be used to handle all the snapshots automatically.
  • It lets you create, copy, and delete Amazon EBS snapshots (a scripted sketch follows).
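A minimal boto3 sketch of such a cleanup script (the 30-day retention window is an illustrative assumption):

```python
from datetime import datetime, timedelta, timezone
import boto3

ec2 = boto3.client("ec2")
cutoff = datetime.now(timezone.utc) - timedelta(days=30)  # assumed retention window

# List snapshots owned by this account and delete those older than the cutoff.
paginator = ec2.get_paginator("describe_snapshots")
for page in paginator.paginate(OwnerIds=["self"]):
    for snapshot in page["Snapshots"]:
        if snapshot["StartTime"] < cutoff:
            print(f"Deleting {snapshot['SnapshotId']} from {snapshot['StartTime']}")
            ec2.delete_snapshot(SnapshotId=snapshot["SnapshotId"])
```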

 

49. What are the varieties of load balancers in AWS?

 

You will come across three types of load balancers in AWS:

  • Application Load Balancer
  • Network Load Balancer
  • Classic Load Balancer

 

50. What are the different areas one can manage with AWS IAM?

 

With the help of AWS IAM, you can do the following:

  • Create and manage IAM users
  • Create and manage IAM groups
  • Manage the security credentials of users
  • Create and apply policies to grant access to AWS services and resources

 

51. What do you mean by subnet?

A subnet is a segment of a network's IP address range: a large block of IP addresses divided into smaller pieces.

 

52. What is the purpose of AWS CloudTrail?

 

CloudTrail is a purpose-built tool for logging and tracking API calls. It can also be used to audit all S3 bucket accesses.

 

53. Describe Amazon ElastiCache.

 

Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory cache or data store in the cloud.

  

55. What is the boot time for an instance store-backed AMI?

The boot time for an Amazon instance store-backed AMI is less than 5 minutes.

 

56. Do you require an internet gateway to utilize peering connections?

No. A VPC peering connection does not require an internet gateway; traffic between peered VPCs travels over the AWS network.

 

57. What are the different types of Cloud services?

The different types of cloud services are:

  • Software as a Service (SaaS)
  • Data as a Service (DaaS)
  • Platform as a Service (PaaS)
  • Infrastructure as a Service (IaaS)

 

58. Can you state some important features of Amazon CloudSearch?

The essential features of Amazon CloudSearch are:

  • Boolean searches
  • Prefix searches
  • Range searches
  • Entire-text search
  • Autocomplete suggestions

 

59. Is vertical scaling allowed on an Amazon instance?

Yes. An Amazon instance can be scaled vertically.

 

60. What are the different layers of cloud architecture, as is explained in AWS training?

Here are the different layers of cloud architecture:

  • Cloud controller
  • Cluster controller
  • Storage controller
  • Node controller

 

61. What do you mean by Cloud Watch?

CloudWatch is a monitoring service available in Amazon Web Services. With the assistance of CloudWatch, one can monitor various resources of the organization and easily look at things like health, applications, and networks.

 

62. Can you provide a few examples of DB engines used in AWS RDS?

Here are examples of DB engines used in AWS RDS:

  • MariaDB
  • Oracle
  • Microsoft SQL Server
  • MySQL
  • PostgreSQL

 

63. Does Amazon support all services in every region?

 

No. Not every service is provided in every region; most services are region-specific.

 

64. Which is the cheapest AWS region?

 

US East (N. Virginia), formerly known as US Standard, is generally the cheapest region; it is also the most established AWS region.

 

65. What are the advantages of AWS?

 

One of AWS's main advantages is that it provides services to its users at an extremely low cost. The services are easy to use, and users do not have to worry about security, servers, or databases. It comes with several benefits that make users rely on it easily.

 

66. Which service would you utilize if you need to perform real-time monitoring of AWS resources and get actionable insights?

 

Amazon CloudWatch is the service that one can utilize.

 

67. Is AWS RDS available for free?

 

Yes, through the AWS Free Tier: customers can use RDS to get started with a managed database service in the cloud for free.

 

68. State the difference between block storage and file storage?

 

Block storage works at a lower level, managing data as blocks, whereas file storage works at a higher level, presenting data as files and folders.

The scenario-based AWS interview questions above, for both experienced candidates and freshers, are just some of the examples you may come across while interviewing at big IT companies like TCS, Cognizant, and Infosys.


 

 

https://www.projectpro.io/article/top-50-aws-interview-questions-and-answers-for-2018/399

 

1. Differentiate between on-demand instances and spot instances.

Spot Instances are spare unused Elastic Compute Cloud (EC2) instances that one can bid for. Once the bid exceeds the existing spot price (which changes in real-time based on demand and supply), the spot instance will be launched. If the spot price exceeds the bid price, the instance can go away anytime and terminate within 2 minutes of notice. The best way to decide on the optimal bid price for a spot instance is to check the price history of the last 90 days available on the AWS console. The advantage of spot instances is that they are cost-effective, and the drawback is that they can be terminated anytime. Spot instances are ideal to use when –

  • There are optional nice-to-have tasks.
  • You have flexible workloads that can run when there is enough computing capacity.
  • Tasks that require extra computing capacity to improve performance.

On-demand instances are made available whenever you require them, and you need to pay for the time you use them hourly. These instances can be released when they are no longer required and do not require any upfront commitment. The availability of these instances is guaranteed by AWS, unlike spot instances.

The best practice is to launch a couple of on-demand instances which can maintain a minimum level of guaranteed compute resources for the application and add on a few spot instances whenever there is an opportunity.
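A minimal boto3 sketch of launching a Spot instance alongside on-demand capacity (the AMI ID, instance type, and max price are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Launch one Spot instance; omit MaxPrice to cap the bid at the on-demand price.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {"MaxPrice": "0.05"},  # cap at $0.05/hour (illustrative)
    },
)
```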


2. What is the boot time for an instance store-backed instance?

The boot time for an Amazon instance store-backed AMI is usually less than 5 minutes.

3. Is it possible to vertically scale on an Amazon Instance?  If yes, how?

Following are the steps to scale an Amazon Instance vertically –

  • Spin up a larger Amazon instance than the existing one.
  • Pause the new instance, detach its root EBS volume from the server, and discard it.
  • Stop the live running instance and detach its root volume.
  • Make a note of the unique device ID and attach that root volume to the new server.
  • Start the instance again.

4. Differentiate between vertical and horizontal scaling in AWS.

The main difference between vertical and horizontal scaling is how you add compute resources to your infrastructure. In vertical scaling, more power is added to the existing machine. In contrast, in horizontal scaling, additional resources are added to the system with the addition of more machines into the network so that the workload and processing are shared among multiple devices. The best way to understand the difference is to imagine retiring your Toyota and buying a Ferrari because you need more horsepower. This is vertical scaling. Another way to get that added horsepower is not to ditch the Toyota for the Ferrari but buy another car. This can be related to horizontal scaling, where you drive several cars simultaneously.

When the users are up to 100, an Amazon EC2 instance alone is enough to run the entire web application or the database until the traffic ramps up. Under such circumstances, when the traffic ramps up, it is better to scale vertically by increasing the capacity of the EC2 instance to meet the increasing demands of the application. AWS supports instances up to 128 virtual cores or 488GB RAM.

When the users for your application grow to 1,000 or more, vertical scaling cannot handle all the requests, and there is a need for horizontal scaling, which is achieved through a distributed file system, clustering, and load balancing.

 

 

5. What is the total number of buckets that can be created in AWS by default?

100 buckets can be created in each of the AWS accounts. If additional buckets are required, increase the bucket limit by submitting a service limit increase.

6. Differentiate between Amazon RDS, Redshift, and DynamoDB.

| Feature              | Amazon RDS                                              | Redshift                       | DynamoDB                                    |
|----------------------|---------------------------------------------------------|--------------------------------|---------------------------------------------|
| Computing resources  | Instances with 64 vCPU and 244 GB RAM                   | Nodes with vCPU and 244 GB RAM | Not specified; SaaS (Software as a Service) |
| Maintenance window   | 30 minutes every week                                   | 30 minutes every week          | No impact                                   |
| Database engine      | MySQL, Oracle DB, SQL Server, Amazon Aurora, PostgreSQL | Redshift                       | NoSQL                                       |
| Primary usage        | Conventional databases                                  | Data warehouse                 | Database for dynamically modified data      |
| Multi-AZ replication | Additional service                                      | Manual                         | Built-in                                    |

7. An organization wants to deploy a two-tier web application on AWS.  The application requires complex query processing and table joins. However, the company has limited resources and requires high availability. Which is the best configuration for the company based on the requirements?

DynamoDB addresses the core problems of database storage: scalability, management, reliability, and performance, but it does not have an RDBMS's functionality. DynamoDB does not support complex joins, complex query processing, or complex transactions. You can run a relational engine on Amazon RDS or Amazon EC2 for this kind of functionality.

 

8. What should be the instance’s tenancy attribute for running it on single-tenant hardware?

The instance tenancy attribute must be set to "dedicated"; other values are not appropriate for this operation.

 

9. What are the important features of a classic load balancer in Amazon Elastic Compute Cloud (EC2)?

 

  • The high availability feature ensures that the traffic is distributed among Amazon EC2 instances in single or multiple availability zones. This ensures a high scale of availability for incoming traffic.
  • Classic load balancer can decide whether to route the traffic based on the health check’s results.
  • You can implement secure load balancing within a network by creating security groups in a VPC.
  • Classic load balancer supports sticky sessions, which ensures a user’s traffic is always routed to the same instance for a seamless experience.

10. What parameters will you consider when choosing the availability zone?

Performance, pricing, latency, and response time are factors to consider when selecting the availability zone.

 

11. Which instance will you use for deploying a 4-node Hadoop cluster in AWS?

We can use a c4.8xlarge or i2.xlarge instance for this, but using a c4.8xlarge will require a better configuration on the client PC.

 

12. How will you bind the user session with a specific instance in ELB (Elastic Load Balancer)?

This can be achieved by enabling sticky sessions.
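A minimal boto3 sketch of enabling stickiness on an Application Load Balancer target group (the target group ARN and cookie duration are placeholders):

```python
import boto3

elbv2 = boto3.client("elbv2")

# Enable load-balancer-generated cookie stickiness on a target group.
elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                   "targetgroup/my-tg/0123456789abcdef",
    Attributes=[
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "lb_cookie"},
        {"Key": "stickiness.lb_cookie.duration_seconds", "Value": "3600"},
    ],
)
```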

 

13. What are the possible connection issues you encounter when connecting to an Amazon EC2 instance?

  • Unprotected private key file
  • Server refused key
  • Connection timed out
  • No supported authentication method available
  • Host key not found, permission denied.
  • User key not recognized by the server, permission denied.

14. Can you run multiple websites on an Amazon EC2 server using a single IP address?

Not with a single IP address: more than one Elastic IP is required to run multiple websites on Amazon EC2.

 

15. What happens when you reboot an Amazon EC2 instance?

Rebooting an instance is similar to rebooting a PC. You do not return to the image's original state; however, the hard disk contents are the same as before the reboot.

 

16. How is stopping an Amazon EC2 instance different from terminating it?

Stopping an Amazon EC2 instance results in a normal shutdown being performed on the instance, and the instance moves to a stopped state. However, when an EC2 instance is terminated, it moves to a terminated state, and any EBS volumes set to delete on termination are deleted and cannot be recovered.

Advanced AWS Interview Questions and Answers

Here are a few AWS Interview Questions and Answers for experienced professionals to further strengthen their knowledge of AWS services useful in cloud computing.

 

17. Mention the native AWS security logging capabilities.

AWS CloudTrail: 

This AWS service facilitates security analysis, compliance auditing, and resource change tracking of an AWS environment. It can also provide a history of AWS API calls for a particular account. CloudTrail is an essential AWS service required to understand AWS use and should be enabled in all AWS regions for all AWS accounts, irrespective of where the services are deployed. CloudTrail delivers log files, with optional log file integrity validation, to a designated Amazon S3 (Amazon Simple Storage Service) bucket approximately every five minutes. When new logs have been delivered, AWS CloudTrail may be configured to send messages using Amazon Simple Notification Service (Amazon SNS). It can also integrate with AWS CloudWatch Logs and AWS Lambda for processing purposes.

AWS Config:

AWS Config is another significant AWS service that can create an AWS resource inventory, send notifications for configuration changes and maintain relationships among AWS resources. It provides a timeline of changes in resource configuration for specific services. If multiple changes occur over a short interval, then only the cumulative changes are recorded. Snapshots of changes are stored in a configured Amazon S3 bucket and can be set to send Amazon SNS notifications when resource changes are detected in AWS. Apart from tracking resource changes, AWS Config should be enabled to troubleshoot or perform any security analysis and demonstrate compliance over time or at a specific time interval.

AWS Detailed Billing Reports:

Detailed billing reports show the cost breakdown by the hour, day, or month, by a particular product or product resource, by each account in a company, or by customer-defined tags. Billing reports indicate how AWS resources are consumed and can be used to audit a company’s consumption of AWS services. AWS publishes detailed billing reports to a specified S3 bucket in CSV format several times daily. 

Amazon S3 (Simple Storage Service) Access Logs:

Amazon S3 Access logs record information about individual requests made to the Amazon S3 buckets and can be used to analyze traffic patterns, troubleshoot, and perform security and access auditing. The access logs are delivered to designated target S3 buckets on a best-effort basis. They can help users learn about the customer base, define access policies, and set lifecycle policies.

Elastic Load Balancing Access Logs:

Elastic Load Balancing Access logs record the individual requests made to a particular load balancer. They can also analyze traffic patterns, perform troubleshooting, and manage security and access auditing. The logs give information about the request processing durations. This data can improve user experiences by discovering user-facing errors generated by the load balancer and debugging any errors in communication between the load balancers and back-end web servers. Elastic Load Balancing access logs get delivered to a configured target S3 bucket based on the user requirements at five or sixty-minute intervals.

Amazon CloudFront Access Logs:

Amazon CloudFront Access logs record individual requests made to CloudFront distributions. Like the previous two access logs, Amazon CloudFront Access Logs can also be used to analyze traffic patterns, perform any troubleshooting required, and for security and access auditing. Users can use these access logs to gather insight about the customer base, define access policies, and set lifecycle policies. CloudFront Access logs get delivered to a configured S3 bucket on a best-effort basis.

Amazon Redshift Logs:

Amazon Redshift logs collect and record information concerning database connections, any changes to user definitions, and activity. The logs can be used for security monitoring and troubleshooting any database-related issues. Redshift logs get delivered to a designated S3 bucket.

Amazon Relational Database Service (RDS) Logs:

RDS logs record information on access, errors, performance, and database operation. They make it possible to analyze the security, performance, and operation of AWS-managed databases. RDS logs can be viewed or downloaded using the Amazon RDS console, the Amazon RDS API, or the AWS Command Line Interface. The log files may also be queried using DB engine-specific database tables.


Amazon VPC Flow Logs:

Amazon VPC Flow logs collect information specific to the IP traffic, incoming and outgoing from the Amazon Virtual Private Cloud (Amazon VPC) network interfaces. They can be applied, as per requirements, at the VPC, subnet, or individual Elastic Network Interface level. VPC Flow log data is stored using Amazon CloudWatch Logs. To perform any additional processing or analysis, the VPC Flow log data can be exported using Amazon CloudWatch. It is recommended to enable Amazon VPC flow logs for debugging or monitoring policies that require capturing and visualizing network flow data.
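A minimal sketch of enabling flow logs with boto3 is shown below; the VPC ID, log group name, and IAM role ARN are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Publish flow logs for an entire VPC to a CloudWatch Logs group.
ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],
    ResourceType="VPC",
    TrafficType="ALL",   # capture ACCEPT, REJECT, or ALL traffic
    LogGroupName="vpc-flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/flow-logs-role",
)
```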

Centralized Log Management Options:

Various options are available in AWS for centrally managing log data. Most of the AWS audit and access logs data are delivered to Amazon S3 buckets, which users can configure.

Consolidation of all the Amazon S3-based logs into a centralized, secure bucket makes it easier to organize, manage and work with the data for further analysis and processing. The Amazon CloudWatch logs provide a centralized service where log data can be aggregated.

 

18. What is a DDoS attack, and how can you handle it?

A Denial of Service (DoS) attack occurs when a malicious attempt affects the availability of a particular system, such as an application or a website, to the end-users. A DDoS attack, or Distributed Denial of Service attack, occurs when the attacker uses multiple sources to generate the attack. DDoS attacks are generally segregated based on the layer of the Open Systems Interconnection (OSI) model that they attack. The most common DDoS attacks tend to be at the Network, Transport, Presentation, and Application layers, corresponding to layers 3, 4, 6, and 7, respectively.

 

19. What are RTO and RPO in AWS?

The Disaster Recovery (DR) Strategy involves having backups for the data and redundant workload components. RTO and RPO are objectives used to restore the workload and define recovery objectives on downtime and data loss.

Recovery Time Objective or RTO is the maximum acceptable delay between the interruption of a service and its restoration. It determines an acceptable time window during which a service can remain unavailable.

Recovery Point Objective or RPO is the maximum acceptable amount of time since the last data recovery point. It defines how much data loss, measured in time from the last recovery point to the service interruption, can be considered acceptable.

RPO and RTO are set by the organization using AWS and have to be set based on business needs. The cost of recovery and the probability of disruption can help an organization determine the RPO and RTO.


 

20. How can you automate EC2 backup by using EBS? 

AWS EC2 instances can be backed up by creating snapshots of EBS volumes. The snapshots are stored with the help of Amazon S3. Snapshots can capture all the data contained in EBS volumes and create exact copies of this data. The snapshots can then be copied and transferred into another AWS region, ensuring safe and reliable storage of sensitive data.

Before running AWS EC2 backup, it is recommended to stop the instance or detach the EBS volume that will be backed up. This ensures that any failures or errors that occur will not affect newly created snapshots.

The following steps must be followed to back up an Amazon EC2 instance (a scripted sketch of steps 4 to 6 follows the list):

  1. Sign in to the AWS account, and launch the AWS console.
  2. Launch the EC2 Management Console from the Services option.
  3. From the list of running instances, select the instance that has to be backed up.
  4. Find the Amazon EBS volumes attached locally to that particular instance.
  5. List the snapshots of each of the volumes, and specify a retention period for the snapshots. A snapshot has to be created of each volume too.
  6. Remember to remove snapshots that are older than the retention period.
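As a rough illustration, here is a minimal boto3 sketch of steps 4 to 6; the instance ID and retention period are hypothetical placeholders.

```python
import boto3
from datetime import datetime, timedelta, timezone

ec2 = boto3.client("ec2")
INSTANCE_ID = "i-0123456789abcdef0"   # hypothetical instance ID
RETENTION_DAYS = 14                   # hypothetical retention period

# Step 4: find the EBS volumes attached to the instance.
volumes = ec2.describe_volumes(
    Filters=[{"Name": "attachment.instance-id", "Values": [INSTANCE_ID]}]
)["Volumes"]

# Step 5: create a snapshot of each volume.
for vol in volumes:
    ec2.create_snapshot(
        VolumeId=vol["VolumeId"],
        Description=f"Backup of {vol['VolumeId']} from {INSTANCE_ID}",
    )

# Step 6: remove snapshots older than the retention period.
# (A production script would also filter snapshots by volume ID or tag.)
cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)
for snap in ec2.describe_snapshots(OwnerIds=["self"])["Snapshots"]:
    if snap["StartTime"] < cutoff:
        ec2.delete_snapshot(SnapshotId=snap["SnapshotId"])
```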

 

21. Explain how one can add an existing instance to a new Auto Scaling group?

To add an existing instance to a new Auto Scaling group (a scripted equivalent follows these steps):

  1. Open the EC2 console.
  2. From the instances, select the instance that is to be added
  3. Go to Actions -> Instance Settings -> Attach to Auto Scaling Group
  4. Select a new Auto Scaling group and link this particular group to the instance.
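For reference, the same attachment can be scripted with boto3; the instance ID and group name below are hypothetical placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Attach a running instance to an existing Auto Scaling group.
autoscaling.attach_instances(
    InstanceIds=["i-0123456789abcdef0"],
    AutoScalingGroupName="my-asg",
)
```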

 

 

 

Scenario-Based AWS Interview Questions

22. You have a web server on an EC2 instance. Your instance can get to the web but nobody can get to your web server. How will you troubleshoot this issue?

 

23. What steps will you perform to enable a server in a private subnet of a VPC to download updates from the web?

 

24. How will you build a self-healing AWS cloud architecture?

 

25. How will you design an Amazon Web Services cloud architecture for failure?

 

26. As an AWS solution architect, how will you implement disaster recovery on AWS?

 

27. You run a news website in the eu-west-1 region, which updates every 15 minutes. The website is accessed by audiences across the globe and uses an Auto Scaling group behind an Elastic Load Balancer and Amazon Relational Database Service. Static content for the application is on Amazon S3 and is distributed using CloudFront. The Auto Scaling group is set to trigger a scale-up event at 60% CPU utilization. You use an extra-large DB instance with 10,000 Provisioned IOPS that gives CPU utilization of around 80% with freeable memory in the 2GB range. The web analytics report shows that the load time for the web pages is an average of 2 seconds, but the SEO consultant suggests that you bring the average load time of your pages to less than 0.5 seconds. What will you do to improve the website's page load time for your users?

 

28. How will you right-size a system for normal and peak traffic situations?

 

29. Tell us about a situation where you were given feedback that made you change your architectural design strategy.

 

30. What challenges are you looking forward to for the position as an AWS solutions architect?

 

31. Describe a successful AWS project which reflects your design and implementation experience with AWS Solutions Architecture.

 

32. How will you design an e-commerce application using AWS services?

 

33. What characteristics will you consider when designing an Amazon Cloud solution?

 

34. When would you prefer to use provisioned IOPS over Standard RDS storage?

 

35. What do you think AWS is missing from a solutions architect’s perspective?

 

36. What if Google decides to host YouTube.com on AWS? How will you design the solution architecture?


 


AWS EC2 Interview Questions and Answers

37. Is it possible to use S3 with EC2 instances? If yes, how?

Yes, S3 can be used with EC2 instances that have root devices backed by local instance storage.

38. How can you safeguard EC2 instances running on a VPC?

 

AWS security groups associated with EC2 instances can help you safeguard EC2 instances running in a VPC by providing security at the protocol and port access level. You can configure both inbound and outbound traffic rules to enable secured access for the EC2 instance. AWS security groups are similar to a firewall: they contain a set of rules that filter the traffic coming into and out of an EC2 instance and deny any unauthorized access to EC2 instances.

39. How many EC2 instances can be used in a VPC?

By default, you can run up to a total of 20 On-Demand Instances across the instance family in a region. In addition, you can purchase 20 Reserved Instances and request Spot Instances up to your dynamic Spot limit per region.

40. What are some of the key best practices for security in Amazon EC2?

  • Create individual AWS IAM (Identity and Access Management) users to control access to your AWS resources. Creating separate IAM users provides separate credentials for every user, making it possible to assign different permissions to each user based on the access requirements.
  • Secure the AWS Root account and its access keys.
  • Harden EC2 instances by disabling unnecessary services and applications and installing only the software and tools that are needed.
  • Grant least privilege: open up only the permissions required to perform a specific task and not more than that. Additional permissions can be granted as required.
  • Define and review the security group rules regularly.
  • Have a well-defined, strong password policy for all users.
  • Deploy anti-virus software on the AWS network to protect it from Trojans, Viruses, etc.

41. You have a distributed application that processes huge amounts of data across various EC2 instances. The application is designed to recover gracefully from EC2 instance failures. How will you accomplish this in a cost-effective manner?

An On-Demand or Reserved Instance will not be ideal in this case, as the task here is not continuous. Moreover, launching an On-Demand Instance whenever work comes up makes no sense because on-demand instances are expensive. In this case, the ideal choice would be a Spot Instance, owing to its cost-effectiveness and lack of long-term commitments.

AWS S3 Interview Questions and Answers

 

42. Will you use encryption for S3?

Yes, it is better to use encryption for sensitive data on S3. S3 supports server-side encryption (SSE-S3, SSE-KMS, and SSE-C) as well as client-side encryption.

 

43. How can you send a request to Amazon S3?

Using the REST API or the AWS SDK wrapper libraries, which wrap the underlying Amazon S3 REST API.

 

44. What is the difference between Amazon S3 and EBS?

 

  • Paradigm: Amazon S3 is an object store; EBS is a filesystem (block storage).
  • Security: S3 access is controlled with private/public keys; an EBS volume is visible only to the EC2 instance it is attached to.
  • Redundancy: S3 stores data redundantly across data centers; EBS replicates within a single data center.
  • Performance: S3 is fast; EBS is superfast.

 

45. Summarize the S3 Lifecycle Policy.

AWS provides the Lifecycle Policy in S3 as a storage cost optimizer. It enables the establishment of data retention policies for S3 objects within buckets: you can manage data securely and set up rules so that objects move between storage classes on a dynamic basis and are removed when they are no longer required.
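As an illustration, here is a minimal boto3 sketch of one such rule; the bucket name, prefix, and time periods are hypothetical placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical policy: move objects under logs/ to Glacier after
# 90 days and delete them after one year.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-log-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```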

 

46. What does the AWS S3 object lock feature do?

We can store objects using the WORM (write-once-read-many) approach with S3 Object Lock. An S3 user can use this feature to prevent data from being overwritten or deleted for a set period of time or indefinitely. Organizations typically use S3 Object Lock to satisfy legal requirements that mandate WORM storage.
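A minimal boto3 sketch, assuming a new bucket (Object Lock must be enabled at bucket creation) and a hypothetical one-year compliance-mode retention:

```python
import boto3

s3 = boto3.client("s3")

# Object Lock can only be enabled when the bucket is created.
# (Outside us-east-1 you would also pass CreateBucketConfiguration.)
s3.create_bucket(
    Bucket="example-worm-bucket",
    ObjectLockEnabledForBucket=True,
)

# Default retention: objects cannot be overwritten or deleted for
# 365 days; COMPLIANCE mode cannot be shortened or removed by anyone.
s3.put_object_lock_configuration(
    Bucket="example-worm-bucket",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 365}},
    },
)
```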

AWS IAM Interview Questions and Answers

 

47. What do you understand by AWS policies?

 

In AWS, policies are objects that regulate the permissions of an entity (users, groups, or roles) or an AWS resource. Policies are stored as JSON documents. Identity-based policies, resource-based policies, permissions boundaries, Organizations SCPs, ACLs, and session policies are the six categories of policies that AWS offers.
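To make the JSON structure concrete, here is a hedged boto3 sketch that creates a simple identity-based policy; the policy and bucket names are hypothetical placeholders.

```python
import json
import boto3

iam = boto3.client("iam")

# A minimal identity-based policy granting read-only access to one bucket.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
        }
    ],
}

iam.create_policy(
    PolicyName="ExampleS3ReadOnly",
    PolicyDocument=json.dumps(policy_document),
)
```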

 

48. What does AWS IAM MFA support mean?

MFA refers to Multi-Factor Authentication. AWS IAM MFA adds an extra layer of security by requiring, in addition to the user's username and password, a code generated by the MFA device associated with the user account when accessing the AWS Management Console.

 

49. How do IAM roles work?

An IAM role is an IAM identity created in an AWS account and granted specific permission policies. These policies outline what each IAM (Identity and Access Management) role is allowed and prohibited to do within the AWS account. IAM roles do not store login credentials or access keys; instead, temporary security credentials are created for each role session. Roles are typically used to grant access to users, services, or applications that need explicit permission to use an AWS resource.
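The temporary-credential flow can be sketched with boto3 as follows; the role ARN and session name are hypothetical placeholders.

```python
import boto3

sts = boto3.client("sts")

# Assume the role and receive short-lived security credentials.
resp = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/ExampleAppRole",
    RoleSessionName="example-session",
    DurationSeconds=3600,   # credentials expire after one hour
)

creds = resp["Credentials"]

# Use the temporary credentials instead of long-lived access keys.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```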

 

50. What happens if you have an AWS IAM statement that enables a principal to conduct an activity on a resource and another statement that restricts that same action on the same resource?

If more than one statement is applicable, the explicit Deny always wins: deny overrides allow.

 

51. Which identities are available in the Principal element?

IAM roles & roles from within your AWS accounts are the most important type of identities. In addition, you can define federated users, role sessions, and a complete AWS account. AWS services like ec2, cloudtrail, or dynamodb rank as the second most significant type of principal.


AWS Cloud Engineer Interview Questions and Answers

 

52. If you have half of the workload on the public cloud while the other half is on local storage, what architecture will you use for this?

 

Hybrid Cloud Architecture.

53. What does an AWS Availability Zone mean?

An Availability Zone is one of multiple isolated locations within an AWS Region where resources are hosted. Availability Zones have low-latency connections to one another, so applications can be designed across several zones to support fault tolerance efficiently.

 

54. What does "data center" mean for Amazon Web Services (AWS)?

According to the Amazon Web Services concept, a data center consists of the physical servers that power the offered AWS resources. Each Availability Zone includes one or more AWS data centers to offer Amazon Web Services customers the necessary assistance and support.

 

55. How does an API gateway (rest APIs) track user requests?

As user queries move via the Amazon API Gateway REST APIs to the underlying services, we can track and examine them using AWS X-Ray.

 

56. What distinguishes an EMR task node from a core node?

A core node comprises software components that execute operations and store data in the Hadoop Distributed File System (HDFS); every multi-node cluster has at least one core node. A task node contains software elements that exclusively execute tasks; task nodes are optional and do not store data in HDFS.

AWS Technical Interview Questions

 

57. A content management system running on an EC2 instance is approaching 100% CPU utilization. How will you reduce the load on the EC2 instance?

This can be done by attaching a load balancer to an auto scaling group to efficiently distribute load among multiple instances.

 

58. What happens when you launch instances in Amazon VPC?

Each instance receives a default private IP address when launched in Amazon VPC. This approach is considered ideal when connecting cloud resources with data centers.

 

59. Can you modify the private IP address of an EC2 instance while it is running in a VPC?

It is not possible to change the primary private IP address. However, secondary IP addresses can be assigned, unassigned, or moved between instances at any given point.

 

60. You are launching an instance under the free usage tier from AMI, having a snapshot size of 50GB. How will you launch the instance under the free usage tier?

It is not possible to launch this instance under the free usage tier.

 

61. Which load balancer will you use to make routing decisions at the application or transport layer that supports VPC or EC2?

Classic Load Balancer.

Terraform AWS Interview Questions and Answers

 

62. What is the Terraform provider?

Terraform is a tool for managing and configuring infrastructure resources, including computer systems, virtual machines (VMs), network switches, and containers. A provider is a plugin responsible for understanding API interactions with a given platform and exposing its resources. Terraform supports a wide range of cloud service providers.

 

63. Can we develop on-premise infrastructure using Terraform?

It is possible to build on-premise infrastructure using Terraform. We can choose from a wide range of options to determine which vendor best satisfies our needs. 

 

64. Can you set up several providers using Terraform?

Terraform enables multi-provider deployments, including SDN management and on-premise applications like OpenStack and VMware.

 

65. What causes a duplicate resource error to be ignored during terraform application?

Here are some of the possible ways to resolve it:

  • Rebuild the resources using Terraform after deleting them from the cloud provider's API.
  • Eliminate some resources from the code to prevent Terraform from managing them.
  • Import resources into Terraform, then remove any code that attempts to copy them.

66. What is a Resource Graph in Terraform?

Terraform represents resources and their dependencies in a resource graph. Independent resources can be created and modified in parallel. Terraform uses the graph to develop a plan for configuration changes, which also helps identify problems, such as missing or cyclic dependencies, early.


AWS Glue Interview Questions and Answers

 

67. How does AWS Glue Data Catalog work?

AWS Glue Data Catalog is a managed AWS service that enables you to store, annotate, and share metadata in the AWS Cloud. Each AWS account has one AWS Glue Data Catalog per region. It establishes a single location where several systems can store and obtain metadata, rather than keeping data in data silos, and lets you query and modify the data using that metadata. AWS Identity and Access Management (IAM) policies restrict access to the data sources managed by the AWS Glue Data Catalog.
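For illustration, here is a minimal boto3 sketch of registering and browsing catalog metadata; the database name is a hypothetical placeholder.

```python
import boto3

glue = boto3.client("glue")

# Create a database (a container for table metadata) in the catalog.
glue.create_database(DatabaseInput={"Name": "sales_raw"})

# List the tables registered in that database, e.g. by a crawler.
for table in glue.get_tables(DatabaseName="sales_raw")["TableList"]:
    location = table.get("StorageDescriptor", {}).get("Location")
    print(table["Name"], location)
```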

 

68. What exactly does the AWS Glue Schema Registry do?

The AWS Glue Schema Registry lets you validate and control the lifecycle of streaming data using registered Apache Avro schemas. The Schema Registry integrates with Apache Kafka, AWS Lambda, Amazon Managed Streaming for Apache Kafka (MSK), Amazon Kinesis Data Streams, Apache Flink, and Amazon Kinesis Data Analytics for Apache Flink.


 

69. What relationship exists between AWS Glue and AWS Lake Formation?

 

AWS Lake Formation builds on the shared infrastructure of AWS Glue, which provides a serverless architecture, console controls, ETL code development, and job monitoring.

 

70. How can AWS Glue Schema Registry keep applications highly available?

The Schema Registry storage and control layer supports the AWS Glue SLA, and the serializers and deserializers employ best-practice caching techniques to maximize client schema availability.

 

71. What do you understand by the AWS Glue database?

 

The AWS Glue Data Catalog database is a container for storing tables. You create a database when you run a crawler or manually add a table. The AWS Glue console displays a list of all your databases.

 

AWS Lambda Interview Questions and Answers

72. What are the best security techniques in Lambda?

Lambda offers several options for security. Identity and Access Management (IAM) can be used to limit access to resources, and permissions should follow the principle of least privilege. Access can be restricted for unreliable or unauthorized hosts, and security group rules should be reviewed over time to keep pace with changes.

 

73. What does Amazon elastic block store mean?

 

Amazon Elastic Block Store is a virtual storage area network that provides block-level volumes for running workloads. Because volumes are replicated, users do not need to worry about data loss even if a single disk fails. EBS allows storage to be provisioned and allocated, and it can also be managed through the API if necessary.

74. How much time can an AWS Lambda function run for?

An AWS Lambda function can run for at most 15 minutes (900 seconds) per invocation. The timeout is set at 3 seconds by default, and you can configure it to any value between 1 second and 15 minutes.

75. Is the infrastructure that supports AWS Lambda accessible?

No. The infrastructure on which AWS Lambda runs is inaccessible once it begins managing compute resources on the user's behalf. This allows Lambda to carry out health checks, deploy security patches, and perform other standard maintenance.

76. Do the AWS Lambda-based functions remain operational if the code or configuration changes?

Yes. When a Lambda function is updated, there is a brief window, typically less than a minute, during which requests may be served by either the old or the new version of the function.

AWS Interview Questions and Answers for DevOps

77. What does AWS DevOps' CodePipeline mean?

 

AWS CodePipeline is a service that provides continuous integration and continuous delivery features, along with provisions for infrastructure updates. User-defined release model protocols make it very simple to perform tasks like building, testing, and deploying after each build.

78. How can AWS DevOps manage continuous integration and deployment?

 

The source code for an application must be stored and versioned using AWS Developer Tools. The application is then built, tested, and deployed automatically to an AWS instance or a local environment. When implementing continuous integration and deployment services, it is best to start with CodePipeline and use CodeBuild and CodeDeploy as necessary.

79. What role does CodeBuild play in the release process automation?

 

Setting up CodeBuild first, then connecting it directly with the AWS CodePipeline, makes it simple to set up and configure the release process. This makes it possible to add build steps continually, and as a result, AWS handles the processes for continuous integration and continuous deployment.

80. Is it possible to use Jenkins and AWS CodeBuild together with AWS DevOps?

It is simple to combine AWS CodeBuild with Jenkins to perform and execute jobs in Jenkins. Creating and manually controlling each worker node in Jenkins is no longer necessary because build jobs are pushed to CodeBuild and then executed there.

81. How does CloudFormation vary from AWS Elastic Beanstalk?

AWS Elastic Beanstalk and CloudFormation are two core AWS services whose architectures make it simple for them to work together. Elastic Beanstalk provides an environment in which cloud-deployed applications run, and it integrates with CloudFormation's tools to manage the lifecycle of the apps. This makes using several AWS resources quite simple and ensures great scalability for various applications, from older applications to container-based solutions.


AWS Interview Questions and Answers for Java Developer

82. Is one Elastic IP enough for all the instances you have been running?

Instances have both public and private addresses. The private and public addresses remain associated with the instance until the Amazon EC2 instance is terminated or stopped. Elastic addresses can be used in place of these addresses, and they remain with the instance as long as the user doesn't explicitly disassociate them. There will be a need for more than one Elastic IP if numerous websites are hosted on an EC2 server.

83. What networking performance metrics can you expect when launching instances in a cluster placement group?

The following factors affect network performance:

  • Type of instance
  • Network performance criteria

When instances are launched in a cluster placement group, one should expect the following:

  • Single flow of 10 Gbps.
  • 20 Gbps full-duplex
  • Network traffic to destinations outside the placement group is restricted to 5 Gbps.

84. What can you do to increase data transfer rates in Snowball?

The following techniques can speed up data transfer in Snowball:

  • Execute multiple copy operations simultaneously.
  • Copy data to a single snowball from many workstations.
  • To reduce the encryption overhead, batch small files together rather than transferring them one at a time.
  • Removing any additional hops.

85. Consider a scenario where you want to change your current domain name registration to Amazon Route 53 without affecting your web traffic. How can you do it?

Below are the steps to transfer your domain name registration to Amazon Route 53:

  • Obtain a list of the DNS records related to your domain name.
  • Create a hosted zone using the Route 53 Management Console, which will store your domain's DNS records, and launch the transfer operation from there.
  • Get in touch with the domain name registrar you used to register. Examine the transfer processes.
  • Once the registrar propagates the new name server delegations, your DNS queries will be served by Route 53.

86. How do you send a request to Amazon S3?

There are different options for submitting requests to Amazon S3:

  • Use REST APIs.
  • Use AWS SDK Wrapper Libraries.

AWS Interview Questions for Testers

87. Mention the default tables you receive while establishing an AWS VPC.

When building an AWS VPC, we get three defaults: the network ACL, the security group, and the route table.

88. How do you ensure the security of your VPC?

To regulate the security of your VPC, use security groups, network access control lists (ACLs), and flow logs.

89. What does security group mean?

In AWS, security groups, which are essentially virtual firewalls, are used to regulate the inbound and outbound traffic to instances. You can manage traffic depending on different criteria, such as protocol, port, and source and destination.

90. What purpose does the ELB gateway load balancer endpoint serve?

 

ELB Gateway Load Balancer endpoints privately connect application servers in the service consumer virtual private cloud (VPC) with virtual appliances in the service provider VPC.

91. How is a VPC protected by the AWS Network Firewall?

 

The stateful firewall provided by AWS Network Firewall protects against unauthorized access to your Virtual Private Cloud (VPC) by monitoring connections and identifying protocols. The service's intrusion prevention system uses active flow inspection with signature-based detection to detect and block security loopholes. It also employs web filtering to block known malicious URLs.

AWS Data Engineer Interview Questions and Answers

92. What type of performance can you expect from Elastic Block Storage service? How do you back it up and enhance the performance?

The performance of Elastic Block Store volumes varies: it can go above the SLA performance level and drop below it. The SLA provides an average disk I/O rate, which can at times frustrate performance experts who want reliable and consistent disk throughput on a server; virtual AWS instances do not behave this way. EBS volumes can be backed up through a graphical user interface like ElasticFox or via the snapshot facility through an API call. Performance can also be improved by using Linux software RAID and striping across four volumes.

93. Imagine that you have an AWS application that requires 24x7 availability and can be down only for a maximum of 15 minutes. How will you ensure that the database hosted on your EBS volume is backed up?

Automated backups are the key processes as they work in the background without requiring manual intervention. Whenever there is a need to back up the data, AWS API and AWS CLI play a vital role in automating the process through scripts. The best way is to prepare for a timely backup of the EBS of the EC2 instance. The EBS snapshot should be stored on Amazon S3 (Amazon Simple Storage Service) and can be used to recover the database instance in case of any failure or downtime.

94. You create a Route 53 latency record set from your domain to a system in Singapore and a similar record to a machine in Oregon. When a user in India visits your domain, to which location will he be routed?

Assuming that the application is hosted on an Amazon EC2 instance and multiple instances of the applications are deployed on different EC2 regions. The request is most likely to go to Singapore because Amazon Route 53 is based on latency, and it routes the requests based on the location that is likely to give the fastest response possible.

95. How will you access the data on EBS in AWS?

Elastic block storage, as the name indicates, provides persistent, highly available, and high-performance block-level storage that can be attached to a running EC2 instance. The storage can be formatted and mounted as a file system, or the raw storage can be accessed directly.

96. How will you configure an instance with the application and its dependencies and make it ready to serve traffic?

You can achieve this using lifecycle hooks. They are powerful because they let you pause the creation or termination of an instance so that you can step in and perform custom actions, such as configuring the instance, downloading required files, and any other steps needed to make the instance ready to serve traffic. Every Auto Scaling group can have multiple lifecycle hooks.
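A hedged boto3 sketch of such a hook follows; the group name, hook name, and instance ID are hypothetical placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Hold each newly launched instance in Pending:Wait until it is configured.
autoscaling.put_lifecycle_hook(
    LifecycleHookName="configure-on-launch",
    AutoScalingGroupName="my-asg",
    LifecycleTransition="autoscaling:EC2_INSTANCE_LAUNCHING",
    HeartbeatTimeout=300,        # seconds to wait for configuration
    DefaultResult="ABANDON",     # give up if configuration never completes
)

# Once bootstrapping finishes, signal that the instance may proceed.
autoscaling.complete_lifecycle_action(
    LifecycleHookName="configure-on-launch",
    AutoScalingGroupName="my-asg",
    LifecycleActionResult="CONTINUE",
    InstanceId="i-0123456789abcdef0",
)
```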

Infosys AWS Interview Questions

97. What do AWS export and import mean?

 

AWS Import/Export enables you to move data across AWS (Amazon S3 buckets, Amazon EBS snapshots, or Amazon Glacier vaults) using portable storage devices.

98. What do you understand by AMI?

 

AMI refers to Amazon Machine Image. It's a template that includes the details (an operating system, an application server, and apps) necessary to start an instance. An instance is a copy of the AMI running as a virtual server in the cloud.

99. Define the relationship between an instance and AMI.

You can launch multiple instances from a single AMI. An instance type specifies the hardware of the host computer that hosts your instance; each instance type offers different compute and memory resources. Once an instance has been launched, it becomes a standard host and can be used in the same way as any other computer.

100. Compare AWS with OpenStack.

  • User Interface: AWS offers a GUI console, an EC2 API, and a CLI; OpenStack likewise offers a GUI console, an EC2-compatible API, and a CLI.
  • Computation: EC2 (AWS) vs. Nova (OpenStack)
  • File Storage: S3 (AWS) vs. Swift (OpenStack)
  • Block Storage: EBS (AWS) vs. Cinder (OpenStack)
  • Networking: AWS provides IP addressing, egress, load balancing, firewall (DNS), and VPC; OpenStack provides IP addressing, load balancing, and firewall (DNS).
  • Big Data: Elastic MapReduce (AWS); OpenStack has no direct equivalent.


 

 

 

Here is a list of the 50 best AWS interview questions

Q1) What is AWS?

Answer: AWS is a platform that provides on-demand resources for hosting web services, storage, networking, databases, and other resources over the internet. AWS enables you to pay for only the resources you use, making it a more cost-effective option for businesses of all sizes. 

Q2) What is S3?

Answer: S3 is the perfect storage solution for businesses of all sizes. You can store unlimited data and only pay for what you use. Plus, the payment model is highly flexible, so you can choose the plan that best suits your needs. 

Q3) What is S3 used for?

Answer: S3 is used for a variety of purposes, the most common of which is storage. Companies and individuals use S3 to store data that they don’t want to keep on their own computers, such as photos, videos, and documents. S3 can also be used to host websites and applications. 


Q4) What are EBS volumes?

Answer: EBS can be thought of as your very own elastic block of storage. Attach it to an instance, and your data will be preserved regardless of what happens to the instance; the volume persists until you explicitly delete it.

Q5) What is stored in EBS volume?

Answer: EBS is a storage system that can be used to store persistent data for EC2 instances.  

Q6) What is auto-scaling?

Answer: Auto scaling allows you to automatically scale your instances up and down, depending on your CPU or memory utilization. There are two components to auto-scaling: auto-scaling groups and launch configurations. 

Q7) What are the types of volumes in EBS?

Answer: EBS offers a variety of volume types to suit your needs, such as: 

  • General-purpose 
  • Provisioned IOPS 
  • Magnetic 
  • Cold HDD 
  • Throughput optimized 

Q8) What are the pricing models for EC2 instances?

Answer: The pricing models for EC2 instances come in all shapes and sizes like: 

  • On-demand 
  • Reserved 
  • Spot 
  • Scheduled 
  • Dedicated 


Q9) What are key pairs?

Answer: The security of your instances and virtual machines relies on key pairs. These pairs consist of a public-key and private-key, which are used to connect to your instances. 

Q10) Tell us the volume types for EC2 Instances.

Answer:  Instance store volumes and EBS 

Q11) What are the different types of instances?

Answer: Following are the types of instances, 

  • General-purpose 
  • Computer Optimized 
  • Storage Optimized 
  • Memory-Optimized 
  • Accelerated Computing 

Q12) What are the components of AWS?

Answer: AWS is a comprehensive cloud platform that consists of EC2, S3, Route 53, EBS, CloudWatch, and Key Pairs. 

Q13) What are the four foundational services in AWS?

Answer: Compute, storage, database, and networking are the four foundational services, with identity & access management (IAM) underpinning them. 

Q14) What are the ways to encrypt data in S3?

Answer: 

  • Server-Side Encryption with S3-managed keys (SSE-S3, AES-256 encryption) 
  • Server-Side Encryption with KMS keys (SSE-KMS, Key Management Service) 
  • Server-Side Encryption with Customer-provided keys (SSE-C) 
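As a quick illustration, the first two server-side options can be requested per object with boto3; the bucket, object keys, and KMS alias are hypothetical placeholders.

```python
import boto3

s3 = boto3.client("s3")

# SSE-S3: Amazon S3 manages the AES-256 encryption keys.
s3.put_object(
    Bucket="example-bucket", Key="report.csv", Body=b"col1,col2\n",
    ServerSideEncryption="AES256",
)

# SSE-KMS: encrypt with a key managed by AWS KMS.
s3.put_object(
    Bucket="example-bucket", Key="report-kms.csv", Body=b"col1,col2\n",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="alias/example-key",
)
```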

Q15) What is VPC?

Answer: A VPC (Virtual Private Cloud) is a private network in the cloud. It’s a way to create isolated networks in AWS, similar to on-premises data centers. You can create a VPC with public and private subnets and control access to resources in each subnet. You can also route traffic between subnets and create VPN connections to your on-premises network.
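To make this concrete, here is a minimal boto3 sketch that lays out a VPC with two subnets; the CIDR blocks are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Create the VPC and carve out two subnets.
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]
public = ec2.create_subnet(VpcId=vpc["VpcId"], CidrBlock="10.0.1.0/24")
private = ec2.create_subnet(VpcId=vpc["VpcId"], CidrBlock="10.0.2.0/24")

# Attach an internet gateway; adding a route table entry to it
# (not shown) is what actually makes the first subnet "public".
igw = ec2.create_internet_gateway()["InternetGateway"]
ec2.attach_internet_gateway(
    InternetGatewayId=igw["InternetGatewayId"], VpcId=vpc["VpcId"]
)
```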

 

Q16) What is the difference between VPC and subnet?

Answer: A subnet is an important part of a VPC. A VPC can have all public subnets or a combination of public and private subnets. A private subnet is a subnet that doesn’t have a route to the internet gateway and can be used for VPNs. 

Q17) What are edge locations?

Answer: An edge location is the place where content is cached. When a user tries to access some content, it is served from the edge location if available. If it is not available, the content is fetched from the original location, and a copy is stored in the edge location so it can be accessed more quickly in the future. 

Q18) What are the minimum and maximum size of individual objects that you can store in S3?

Answer: The maximum size of an individual object that you can store in S3 is 5TB, and the minimum size is 0 bytes. 

Q19) What is the default storage class in S3?

Answer: Amazon S3’s default storage class is the S3 Standard. When you don’t specify a storage class, your objects will be stored in this reliable and convenient class. 

Q20) What are the parameters for S3 pricing?

Answer: The pricing model for S3 is as below, 

  • Storage used 
  • Number of requests you make 
  • Storage management 
  • Data transfer 
  • Transfer acceleration 

Q21) What is an AWS glacier?

Answer: Amazon Glacier is a low-cost cloud storage service for data that you don’t need right away. Store your data archiving and backup needs with Amazon Glacier and save! 

Q22) What is the maximum individual archive that you can store in glaciers?

Answer: You can store a maximum individual archive of up to 40 TB. 

Q23) What is the difference between S3 and Glacier?

Answer: Amazon S3 (Simple Storage Service) is one of the most popular AWS cloud storage options. It allows you to store and retrieve any amount of data at any time and from anywhere on the network. 

Amazon Glacier was built for the long-term storage and digital archiving of data, making it a great option for creating large backups for data recovery. 

Q24) What is the prerequisite to work with Cross-region replication in S3?

Answer: To use cross-region replication, you must enable versioning on both the source and destination buckets and make sure they are in different regions. 

Q25) What are the different storage classes in S3?

Answer: Following are the types of storage classes in S3, 

  • Standard frequently accessed 
  • Standard infrequently accessed 
  • One-zone infrequently accessed. 
  • Glacier 

Q26) What is a Cloudwatch?

Answer: CloudWatch is the perfect way to keep an eye on your AWS resources and applications in real-time. With CloudWatch, you can track important metrics that will help you keep your applications running smoothly. 

Q27) What are the cloudwatch metrics that are available for EC2 instances?

Answer: Disk reads, disk writes, CPU utilization, network packets in, network packets out, NetworkIn, NetworkOut, CPUCreditUsage, and CPUCreditBalance. 

Q28) What is CloudWatch vs CloudTrail?

Answer: CloudWatch and CloudTrail provide complementary services that help you keep an eye on your AWS environment. CloudWatch focuses on the activity of AWS services and resources, reporting on their health and performance.  

CloudTrail, on the other hand, is a log of all actions that have taken place inside your AWS environment, so that you can track changes and activity over time. 

Q29) Why do I need CloudWatch?

Answer: CloudWatch data and reports empower users to keep track of application performance, resource use, operational issues, and constraints. This helps organizations resolve technical issues and optimize operations. 

Q30) What are the types of cloudwatch?

Answer: There are two types of CloudWatch monitoring: basic and detailed. Basic monitoring is free, while detailed monitoring incurs a charge. 

Q31) What is an AMI?

Answer: AMI is an acronym for Amazon Machine Image. AMI is a template that stores the software required to launch an Amazon instance. 

Q32) What is an EIP?

Answer: EIP is an elastic IP address that is perfect for dynamic cloud computing. When you need a static IP address for your instances, EIP is the perfect solution. 

Q33) What are reserved instances?

Answer: Reserved Instances are a great way to ensure you always have a fixed capacity of EC2 instances available. With Reserved Instances, you enter into a contract of 1 year or 3 years, and in exchange you receive a significant discount and a capacity reservation compared to On-Demand pricing. 

Q34) What is cloudfront?

Answer: Cloudfront is the perfect way to quickly and easily distribute your content. With its low latency and high data transfer speeds, Cloudfront ensures that your viewers will have a smooth and enjoyable experience. 

Q35) What are roles?

Answer: Roles provide permissions to resources in your account that you trust. With roles, you don’t need to create any usernames or passwords – you can simply work with the resources you need. 

Q36) What are the policies in AWS?

Answer: Policies are permissions that you can attach to the users that you create. 

Q37) What are AWS service-linked roles?

Answer: AWS service-linked roles provide you with an easy way to give the service the permissions it needs to call other services on your behalf. 

Q38) What are the ways to secure access S3 bucket?

Answer: There are two ways that you can control access to your S3 buckets, 

  1. ACL -Access Control List 
  2. Bucket policies 
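As an illustration of the second option, here is a hedged boto3 sketch applying a bucket policy that rejects non-HTTPS requests; the bucket name is a hypothetical placeholder.

```python
import json
import boto3

s3 = boto3.client("s3")

# Deny any request to the bucket that is not sent over HTTPS.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ],
}

s3.put_bucket_policy(Bucket="example-bucket", Policy=json.dumps(policy))
```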

Q39) What is VPC peering connection?

Answer: A VPC peering connection connects two VPCs, in the same or different AWS accounts, so that instances in each VPC can communicate as if they were in the same network. 

Q40) How can you control the security of your VPC?

Answer: You can use security groups and NACLs for fine-grained control over the security of your VPC. 

Q41) What are the database types in RDS?

Answer: Following are the types of databases in RDS, 

  • Aurora 
  • Oracle 
  • MySQL 
  • PostgreSQL 
  • MariaDB 
  • SQL Server 

Q42) What are the types of Route 53 Routing Policy?

Answer:  

  • Simple routing policy 
  • Weighted routing policy 
  • Latency routing policy 
  • Failover routing policy 
  • Geolocation routing policy 
  • Geoproximity routing policy 
  • Multivalue answer routing policy 

Q43) What is the maximum size of messages in SQS?

Answer: The maximum size of messages in SQS is 256 KB. 

Q44) What is Redshift?

Answer: Amazon Redshift is a data warehouse product that is fast, powerful, and fully managed. It can handle petabytes of data in the cloud. 

Q45) What are the different types of storage gateway?

Answer: Following are the types of storage gateway. 

  • File gateway 
  • Volume gateway 
  • Tape gateway 

Q46) What are NAT gateways?

Answer: NAT stands for Network Address Translation. It provides a way for instances in a private subnet to connect to the internet while preventing the internet from initiating a connection with those instances. 

Q47) What is a snowball?

Answer: Snowball is a secure data transport solution for moving large amounts of data into and out of AWS using physical appliances, reducing your network costs and transfer times. 

Q48) What is VPC peering connection?

Answer: With VPC peering, you can connect your business’s two AWS accounts and enjoy the benefits of instances in each VPC behaving as if they are in the same network. This can help reduce costs and increase security. 

Q49) What is SNS?

Answer: SNS is a powerful web service that simplifies the process of notifications from the cloud. You can configure SNS to receive email or message notifications in a matter of minutes. 

Q50) What is the difference between SES and SNS?

Answer: SES (Simple Email Service) takes care of the engineering required to ensure the delivery of e-mails. SNS (Simple Notification Service) is a publish/subscribe messaging service that securely delivers notifications to subscribers over protocols such as e-mail, SMS, and HTTP. 

 

 


1. What are the AWS components?

Below are the AWS components:

  • Data Management and Data Transfer
  • Compute and Networking
  • Storage
  • Automation and Orchestration
  • Operations and Management
  • Visualization
  • Security and Compliance

This question often comes up early in an interview, particularly for entry-level roles. It’s a basic knowledge test, allowing you to show that you understand the fundamentals. In most cases, a simple list-based answer is sufficient, as that keeps your answer concise.

2. Which load balancer supports path-based routing?

The application load balancer supports path-based routing.

Like the question above, this is designed to test your general knowledge of AWS and cloud computing. Usually, you don’t have to provide any information beyond what’s shown above, as you’re essentially showing that you understand a critical fact.

3. What is the availability zone and region in AWS?

A region represents a separate geographic area in AWS, and availability zones are highly available data centers within each AWS region. Also, each region has multiple isolated locations known as availability zones. The code for the availability zone is its region code followed by a letter identifier. The best example is us-east-1a.

Providing the example at the end lets you add that little something extra to your response without going too far. It’s a way to highlight your expertise when you’re answering AWS cloud engineer interview questions a little bit more, which can make a difference.

4. Why is VPC needed?

VPC — or Virtual Private Cloud — is used to create a virtual network in the AWS cloud. It provides complete control over a virtual networking environment, including resource placement, connectivity, and security.

The answer above is pretty concise, but it covers most of the important points. However, if you want to take your answer to this and similar AWS engineer interview questions up a notch, consider following it up with an example from your past experience. For instance, mention a time you utilized VPC in a project and the benefits you gained from doing so.

5. What types of instances do you know?

Below are the different types of AWS instances:

  • General Purpose
  • Compute Optimized
  • Memory Optimized
  • Accelerated Computing
  • Storage Optimized

The trick with AWS data engineer interview questions such as these is the phrasing. Since the hiring manager is asking about the ones you “know,” you may want to start with the instances you’re most experienced with. Then, you can also mention ones that you’re familiar with, at least in a general sense.

6. What is auto-scaling?

Auto-scaling monitors your applications and automatically adjusts capacity to maintain a steady, predictable performance at the lowest possible cost. It makes scaling simple with recommendations that allow you to optimize performance, cost, or balance between them.

Here’s another opportunity to mention an example from your past experience. If you successfully used auto-scaling to balance cost and performance, discuss that project after you provide the definition to highlight not just your knowledge but your ability to apply it effectively.

7. What restrictions apply to AWS Lambda function code?

AWS Lambda has the following restrictions:

  • Maximum disk space — 512 MB
  • Default deployment package size — 50 MB
  • Memory range — 128 to 3008 MB
  • A function's maximum execution timeout is 15 minutes

In most cases, simply listing the restrictions is enough to answer this question well. However, you can also mention why the restrictions are in place if you want to add something extra to your response.

8. How do you trace user requests to Rest APIs (API Gateway)?

We can use AWS X-Ray to trace and analyze user requests as they travel through your Amazon API Gateway REST APIs to the underlying services.

Again, this is a question where you can follow up your basic answer with an example. That can make your response more impressive.

9. What’s the difference between Amazon S3 and EC2?

The main difference between Amazon S3 and EC2 is that S3 stores large amounts of data while EC2 runs the servers in the cloud.

Along with mentioning that point, consider relaying examples of times when you used each technology.

10. Explain how elasticity differs from scalability.

Elasticity enables you to meet dynamic changes in demand, while scalability handles a static increase in the workload. The main purpose of elasticity is to match the resources allocated with the actual amount of resources needed at any given point in time. Scalability handles the changing needs of an application within the boundary of the infrastructure by statically adding or removing resources to meet the application's demands as needed.

The answer above is usually enough. However, time permitting, you can include examples from your work history to drive the points home.

10 AWS interview questions and answers for experienced engineers

As you advance in the field, the interview process gets increasingly challenging. Hiring managers have more expectations regarding your knowledge of big data, database management, system operations, and more. You’re more likely to engage with complex AWS technical interview questions than basic ones like those outlined in the section above.

Fortunately, you can still prepare by reviewing AWS cloud engineer interview questions and answers. They will give you a working foundation, ensuring you’re ready for the types of questions the hiring manager will likely ask.

Here are the top 10 AWS interview questions and answers for experienced engineers:

1. What are the methods to deploy the latest application package to the servers in the autoscaling group in AWS?

There are three deployment methods which are as follows:

  • Using Codepipeline — Deploy the code when the server gets created.
  • Replacing image — Deploy the code to a temporary server, take the image, and replace the image in the autoscaling group.
  • Changing the start-up script/user data in the autoscaling launch config — Pull the code and restart the application server whenever a new server is created.

If you have time, consider providing examples of when you’ve used the various deployment methods. Even if you just briefly mention them, it lets the hiring manager know you’re experienced. Plus, they can always ask follow-up questions for more details.

2. Do we need to open outbound port 80/443 on the load balancer's security group to allow return traffic from the server?

No, the outbound port does not need to be kept open. Security groups are stateful, so return traffic for an allowed inbound connection is permitted automatically.

You can leave your answer that simple if you like. However, you may want to expand on your answer and explain why that’s the case if you have the opportunity.

3. How do you encrypt the connection in transit to RDS?

You use the certificate bundle (TLS/SSL) created by RDS to encrypt the connection in transit.

In most cases, a straightforward answer is fine here, but you do have the option to expand. First, you could reference an example of a time when you tackled that task. Second, you can talk about how the certificate bundle encrypts the connection. The former highlights experience, while the second showcases knowledge, so use the approach that makes the most sense for your situation.

4. How do you enhance AWS security with best practices?

AWS security best practices are as follows:

  • Use accurate account information
  • Use MFA (multi-factor authentication)
  • Validate IAM roles
  • Rotate your keys
  • Limit security groups

Mention the best practices as a starting point. Then, discuss how they impact security, using examples of the types of activity they are meant to promote or prevent.

5. What are the advantages of Redshift in AWS?

The advantages of Redshift in AWS include:

  • Wide adoption
  • Ease of administration
  • Ideal for data lakes
  • Ease of querying
  • Columnar storage
  • Performance
  • Scalability
  • Security
  • Strong AWS ecosystem
  • Pricing

With this answer, you may want to steer away from a simple list. Instead, list some advantages and explain why they’re beneficial based on your experience. Then, present the remaining ones as a list, expanding on the last point you share to make the end of your response more compelling.

6. What are the differences between a core node and task node in EMR?

A core node contains software components that run tasks and store data in a Hadoop Distributed File System, or HDFS. Multi-node clusters have at least one core node. A task node contains software components that only run tasks. Also, it does not store data in HDFS and is technically optional.

If you have an example from your work history that you can use to demonstrate the difference, then consider doing so. Otherwise, a fact-based response is sufficient.

7. How can you speed up data transfer in Amazon Snowball?

There are several ways to speed up data transfer in Amazon Snowball, including:

  • Use the latest Mac or Linux Snowball client
  • Batch small files together
  • Perform multiple copy operations at one time
  • Copy from multiple workstations
  • Transfer directories but not files
  • Don't perform other operations on files during transfer
  • Reduce local network use
  • Eliminate unnecessary hops

Here’s another one of the AWS interview questions for cloud developers where integrating some examples into your list can make for a stronger answer. Reference a past project or workplace task you handled to outline the difference a few of the techniques above make.

8. How do you upload a file larger than 100 MB in Amazon S3?

There are two main options for uploading a file larger than 100 MB in Amazon S3: use the AWS Command Line Interface or use the AWS SDK.

If you want to make your answer more impactful, you can outline situations where one option may be better than another. Alternatively, you can discuss your past experience using the approaches.
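With the SDK, multipart uploads are handled automatically once a size threshold is crossed; the sketch below uses boto3, and the file, bucket, and key names are hypothetical placeholders.

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Files above the threshold are split into parts uploaded in parallel.
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,   # use multipart above 100 MB
    multipart_chunksize=16 * 1024 * 1024,    # 16 MB parts
    max_concurrency=8,
)

s3.upload_file(
    "backup.tar.gz", "example-bucket", "backups/backup.tar.gz",
    Config=config,
)
```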

9. Describe AWS routing policies.

Below are the AWS routing policies:

  • Simple routing policy
  • Failover routing policy
  • Geolocation routing policy
  • Geo-Proximity routing (traffic flow only) policy
  • Latency-based routing policy
  • IP-based routing policy
  • Multi-value answer routing policy
  • Weighted routing policy

With AWS cloud interview questions like this, you typically need to expand a little on each point. Since there are so many policies, just briefly explain each policy’s general purpose in one sentence. Otherwise, your response could end up being too long.

10. What are the consistency models in DynamoDB?

The consistency models in DynamoDB are as follows:

  • Eventual consistency model — maximizes the read throughput with low latency
  • Strong consistency model — provides updated data with high latency

You may want to touch on the benefits and drawbacks of each option or outline use cases for them based on your past experience to make your response more well-rounded.

 

https://flexiple.com/aws/interview-questions

 

1. Define and explain the three basic types of cloud services and the AWS products that are built based on them.

2. What is the relation between the Availability Zone and Region?

3. What is geo-targeting in CloudFront?

4. What are the steps involved in a CloudFormation Solution?

5. How do you upgrade or downgrade a system with near-zero downtime?

6. What are the tools and techniques that you can use in AWS to identify if you are paying more than you should be, and how to correct it?

7. Is there any other alternative tool to log into the cloud environment other than the console?

8. What are the native AWS Security logging capabilities?

9. You are trying to provide a service in a particular region, but you do not see the service in that region. Why is this happening, and how do you fix it?


10. What are the different types of virtualization in AWS, and what are the differences between them?

11. What are the differences between NAT Gateways and NAT Instances?

12. What is an Elastic Transcoder?

13. What are the core components of AWS?

14. What is an EC2 instance, and how is it different from traditional servers?

15. What is the AWS Well-Architected Framework, and why is it important?

16. What is an IAM role in AWS, and why is it used?

17. How do you secure data at rest in AWS?

18. What is AWS Lambda, and what are its key benefits

19. Can you explain the concept of an Availability Zone (AZ)?

20. How do you ensure high availability in AWS?

21. What is an Auto Scaling group, and why is it used?

22. Explain the difference between RDS and DynamoDB.

24. What is the difference between an EC2 instance and an AMI?

25. How do you back up data in AWS?

26. What is the AWS Shared Responsibility Model?

27. What is Amazon Route 53, and what is its primary use?

29. What is the purpose of an S3 bucket policy?

30. What is Amazon Redshift, and when would you use it?

31. What is AWS Identity and Access Management (IAM)?

33. What is AWS Elastic Beanstalk, and how does it simplify application deployment?

34. What is Amazon SNS, and how is it used for notifications?

35. How do you monitor AWS resources and applications?

36. What is CloudWatch?

37. Name some of the AWS services that are not region-specific.

38. How do you set up a system to monitor website metrics in real-time in AWS?

39. What is a DDoS attack, and what services can minimize them?

40. What services can be used to create a centralized logging solution?

What are the AWS Interview Questions for Intermediate and Experienced?

The AWS interview questions for intermediate and experienced candidates delve into the deeper intricacies of the AWS ecosystem, exploring areas such as troubleshooting a failed multi-AZ RDS failover, distinguishing between S3 storage classes like Intelligent-Tiering and One Zone-IA, and pinpointing best practices for optimizing AWS Lambda function performance.

Questions of advanced level are designed for those with substantial hands-on experience. Topics range from creating a multi-region, highly available application using AWS services to cost-optimization strategies for data-intensive applications on EMR. Advanced developers have more than five years of dedicated AWS experience. The importance of these questions cannot be overstated: they gauge the depth of an individual's AWS expertise and differentiate between beginners, intermediates, and experts. Mastery of these questions signifies that a candidate is well-prepared to handle complex AWS challenges in real-world settings.

41. What is AWS's global infrastructure and the concept of regions and availability zones?

42. Explain AWS Organizations and how it simplifies managing multiple AWS accounts.

43. How can cost optimization be achieved in AWS, and what tools can help?

44. Describe the AWS Well-Architected Framework and its pillars.

45. What is the purpose of AWS Trusted Advisor?

46. Compare EC2 and AWS Lambda in terms of use cases and scaling.

47. How do you choose the right EC2 instance type?

48. Differentiate between EC2 On-Demand Instances, Reserved Instances, and Spot Instances.

49. What is AWS Elastic Beanstalk, and when should it be used?

50. Explain Amazon ECS (Elastic Container Service) and its benefits.

51. Describe Amazon VPC and its role in network isolation.

52. How do you set up VPN connections to an Amazon VPC with AWS Direct Connect?

53. Explain AWS Security Groups and Network ACLs.

54. What is AWS IAM, and how is least privilege access enforced?

55. What is AWS WAF, and how does it protect web applications?

56. Describe Amazon S3 storage classes and their use cases.

57. Explain the differences between AWS EFS and AWS EBS.

58. How can you transfer large datasets securely to AWS using Snowball or Snowmobile?

59. Describe AWS Storage Gateway's role in hybrid cloud storage.

60. What are Amazon RDS Multi-AZ deployments, and why are they important?

61. Compare Amazon RDS, Amazon Aurora, and Amazon DynamoDB for different workloads.

62. What is Amazon Redshift, and what are its advantages for data warehousing?

63. How does AWS Database Migration Service (DMS) facilitate database migrations?

64. Explain Amazon Neptune's role in graph database applications.

65. How do you optimize database performance in Amazon DynamoDB?

66. What is AWS CodePipeline, and how does it enable CI/CD?

67. Describe AWS CodeBuild and AWS CodeDeploy in the CI/CD pipeline.

68. What is AWS Elastic Beanstalk's primary purpose?

69. How is AWS CloudFormation used for infrastructure as code (IAC)?

70. What are Blue-Green deployments, and how are they implemented in AWS?

What are the Advanced AWS Interview Questions?

Advanced AWS interview questions revolve around complex scenarios and real-world use cases. These questions delve into areas such as optimization strategies for AWS services, intricate AWS architecture designs, security best practices, and cost-management techniques.

Advanced AWS questions often involve scenario-based prompts that present hypothetical, real-world situations requiring the candidate to solve a problem or suggest best practices. For instance, an interviewer might ask, "How would you design a fault-tolerant system using AWS services?", or "Recommend a strategy to migrate a large database to AWS without downtime."

Scenario-based questions test a candidate's practical knowledge and ability to apply AWS solutions in real-world settings. They evaluate problem-solving skills, depth of AWS expertise, and adaptability to unforeseen challenges. Employers prioritize these qualities, ensuring candidates are able to handle on-the-job tasks effectively.

71. Explain AWS Key Management Service (KMS) and its role in data security.

72. Distinguish between AWS CloudFormation and Elastic Beanstalk for infrastructure as code (IAC).

73. Describe AWS Step Functions and their usage in building serverless workflows.

74. What is AWS Lambda@Edge, and when would you use it?

75. Explain AWS Direct Connect and its importance in hybrid cloud scenarios.

76. Define AWS PrivateLink and its significance for securing access to AWS services.

77. How do you set up and manage AWS VPC peering, and what challenges might arise?

78. What is Amazon Cognito, and how can it be used for user authentication and authorization?

79. Describe the use cases and architecture of Amazon Elastic File System (EFS).

80. How would you architect a highly available and scalable web application using AWS services?

What are the Technical and Non-Technical AWS Interview Questions?

Technical AWS interview questions focus primarily on specific AWS services, their applications, and best practices. Examples include queries on Amazon S3 bucket policies, Elastic Load Balancer configurations, and AWS Lambda functions. Non-technical AWS questions center around general cloud strategies, cost management, and AWS use-case scenarios.

These include various miscellaneous questions that cover a wide range of topics, such as AWS case studies or emerging trends in 2024. Such questions work well in collaborative settings because they stimulate discussion on diverse AWS topics. Keeping up with new AWS innovations and services is crucial, so the importance of these miscellaneous questions is undeniable: they keep candidates updated and adaptable.

81. Explain EC2 instance types and selection criteria.

82. Compare S3 and EBS storage options.

83. How does AWS Lambda work, and what are common use cases?

84. Outline the basics of an AWS Virtual Private Cloud (VPC).

85. What is AWS IAM, and how do you manage access control?

86. How do you create an AWS CloudFormation template?

87. What is AWS Elastic Beanstalk, and how does it simplify deployment?

88. Explain Amazon RDS Multi-AZ deployment.

89. How do you monitor AWS resources using CloudWatch and CloudTrail?

90. Describe Amazon VPC peering and Direct Connect.

91. Describe your experience with AWS and cloud technologies.

92. How do you stay updated with AWS best practices and trends?

93. Share an experience addressing cost optimization in AWS.

94. Provide an example of a challenging AWS situation you resolved.

95. Explain how you collaborate in diverse teams on AWS projects.

96. How do you handle prioritizing and managing multiple AWS tasks?

97. Discuss key considerations for AWS data security and compliance.

98. Share your experience with AWS migrations or large-scale deployments.

99. How do you ensure knowledge sharing within your team on AWS solutions?

100. Explain how you communicate technical concepts to non-technical stakeholders.

What is AWS?

 

https://www.simplilearn.com/tutorials/aws-tutorial/aws-interview-questions

 

Basic AWS Interview Questions

1. Define and explain the three basic types of cloud services and the AWS products that are built based on them?

The three basic types of cloud services are:

  • Computing
  • Storage
  • Networking

Here are some of the AWS products that are built based on the three cloud service types:

Computing - These include EC2, Elastic Beanstalk, Lambda, Auto Scaling, and Lightsail.

Storage - These include S3, Glacier, Elastic Block Storage, and Elastic File System.

Networking - These include VPC, Amazon CloudFront, and Route 53.

2. What is the relation between the Availability Zone and Region?

AWS regions are separate geographical areas, like US-West-1 (Northern California) and Asia Pacific (Mumbai). Availability zones, on the other hand, are isolated locations inside a region. Because each zone has independent infrastructure, you can replicate resources across zones whenever fault tolerance is required.

 

 

3. What is auto-scaling?

Auto-scaling is a function that allows you to provision and launch new instances whenever there is a demand. It allows you to automatically increase or decrease resource capacity in relation to the demand.

4. What is geo-targeting in CloudFront?

Geo-Targeting is a concept where businesses can show personalized content to their audience based on their geographic location without changing the URL. This helps you create customized content for the audience of a specific geographical area, keeping their needs in the forefront.

5. What are the steps involved in a CloudFormation Solution?

Here are the steps involved in a CloudFormation solution:

 

  1. Create or use an existing CloudFormation template using JSON or YAML format.
  2. Save the code in an S3 bucket, which serves as a repository for the code.
  3. Use AWS CloudFormation to call the bucket and create a stack on your template. 
  4. CloudFormation reads the file and understands the services that are called, their order, the relationship between the services, and provisions the services one after the other.
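Steps 2 and 3 can also be scripted. A minimal boto3 sketch, assuming the template has already been uploaded to a hypothetical S3 bucket:

    import boto3

    cfn = boto3.client("cloudformation")

    # Point CloudFormation at the template in S3 and create the stack.
    cfn.create_stack(
        StackName="my-app-stack",
        TemplateURL="https://my-templates.s3.amazonaws.com/app.yaml",  # placeholder URL
        Capabilities=["CAPABILITY_IAM"],  # required only if the template creates IAM resources
    )

    # Block until provisioning finishes (or fails and rolls back).
    cfn.get_waiter("stack_create_complete").wait(StackName="my-app-stack")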

6. How do you upgrade or downgrade a system with near-zero downtime?

You can upgrade or downgrade a system with near-zero downtime using the following steps of migration:

  • Open EC2 console
  • Choose Operating System AMI
  • Launch an instance with the new instance type
  • Install all the updates
  • Install applications
  • Test the instance to see if it’s working
  • If it works, deploy the new instance and replace the older instance
  • Once the new instance is serving traffic, retire the old one; the switchover completes with near-zero downtime

7. What are the tools and techniques that you can use in AWS to identify if you are paying more than you should be, and how to correct it?

You can verify that you are paying the right amount for the resources you are using by employing the following tools:

  • Check the Top Services Table

It is a dashboard in the cost management console that shows you the top five most used services. This will let you know how much money you are spending on the resources in question.

  • Cost Explorer

Cost Explorer helps you view and analyze your usage costs for the last 13 months. You can also get a cost forecast for the upcoming three months.

  • AWS Budgets

This allows you to plan a budget for the services. Also, it will enable you to check if the current plan meets your budget and the details of how you use the services.

  • Cost Allocation Tags

This helps in identifying the resource that has cost more in a particular month. It lets you organize your resources and cost allocation tags to keep track of your AWS costs.
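The same cost data can also be pulled programmatically through the Cost Explorer API. A minimal boto3 sketch; the date range is a placeholder:

    import boto3

    ce = boto3.client("ce")

    # Monthly unblended cost, grouped by service, for one quarter.
    response = ce.get_cost_and_usage(
        TimePeriod={"Start": "2024-01-01", "End": "2024-04-01"},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )

    for period in response["ResultsByTime"]:
        for group in period["Groups"]:
            print(group["Keys"][0], group["Metrics"]["UnblendedCost"]["Amount"])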

8. Is there any other alternative tool to log into the cloud environment other than the console?

These can help you log into the AWS resources:

  • Putty
  • AWS CLI for Linux
  • AWS CLI for Windows
  • AWS CLI for Windows CMD
  • AWS SDK
  • Eclipse

9. What services can be used to create a centralized logging solution?

The essential services you can use are Amazon CloudWatch Logs to collect the logs, Amazon S3 to store them, and Amazon OpenSearch Service (formerly Elasticsearch Service) to visualize them. You can use Amazon Kinesis Data Firehose to move the data from Amazon S3 to Amazon OpenSearch Service.

 

10. What are the native AWS Security logging capabilities?

Most AWS services have their own logging options. Some of them also offer account-level logging, as with AWS CloudTrail, AWS Config, and others. Let’s take a look at two services in particular:

AWS CloudTrail

This is a service that provides a history of the AWS API calls for every account. It lets you perform security analysis, resource change tracking, and compliance auditing of your AWS environment as well. The best part about this service is that it enables you to configure it to send notifications via AWS SNS when new logs are delivered.

AWS Config 

This helps you understand the configuration changes that happen in your environment. This service provides an AWS inventory that includes configuration history, configuration change notification, and relationships between AWS resources. It can also be configured to send information via AWS SNS when new logs are delivered.

11. What is a DDoS attack, and what services can minimize them?

DDoS (Distributed Denial of Service) is a cyber-attack in which the perpetrator floods a website or service with traffic from many sources so that legitimate users cannot access it. The native tools that can help you mitigate DDoS attacks on your AWS services are:

  • AWS Shield
  • AWS WAF
  • Amazon Route53
  • Amazon CloudFront
  • ELB
  • VPC

 

12. You are trying to provide a service in a particular region, but you do not see the service in that region. Why is this happening, and how do you fix it?

Not all Amazon AWS services are available in all regions. When Amazon initially launches a new service, it doesn’t get published in all the regions at once; it starts small and slowly expands to other regions. So, if you don’t see a specific service in your region, chances are the service hasn’t been published in your region yet. If you need a service that is not available locally, you can switch to the nearest region that provides it.

13. How do you set up a system to monitor website metrics in real-time in AWS?

Amazon CloudWatch helps you to monitor the application status of various AWS services and custom events. It helps you to monitor:

  • State changes in Amazon EC2
  • Auto-scaling lifecycle events
  • Scheduled events
  • AWS API calls
  • Console sign-in events

 

14. What are the different types of virtualization in AWS, and what are the differences between them?

The three major types of virtualization in AWS are: 

  • Hardware Virtual Machine (HVM)

It is fully virtualized hardware, where all the virtual machines act separately from each other. These virtual machines boot by executing a master boot record in the root block device of your image.

  • Paravirtualization (PV)

Paravirtualized AMIs boot with PV-GRUB, a special bootloader that chain-loads the kernel specified in the menu.lst file of your image.

  • Paravirtualization on HVM

PV on HVM helps operating systems take advantage of storage and network I/O available through the host.

15. Name some of the AWS services that are not region-specific.

AWS services that are not region-specific are:

  • IAM
  • Route 53
  • Web Application Firewall 
  • CloudFront

16. What are the differences between NAT Gateways and NAT Instances?

While both NAT Gateways and NAT Instances serve the same function, they still have some key differences:

  • A NAT Gateway is a managed service, while a NAT Instance is a regular EC2 instance that you administer yourself
  • A NAT Gateway scales its bandwidth automatically and is highly available within its Availability Zone, whereas a NAT Instance's bandwidth depends on its instance type and failover must be architected manually
  • A NAT Instance can also serve as a bastion host and supports port forwarding, which a NAT Gateway does not

17. What is CloudWatch?

Amazon CloudWatch has the following features:

  • It triggers alarms based on one or more metrics.
  • It helps in monitoring AWS environments - for example, CPU utilization of EC2 and Amazon RDS instances, Amazon SQS, S3, Load Balancer, SNS, and more.

18. What is an Elastic Transcoder?

To support multiple devices with various resolutions, like laptops, tablets, and smartphones, we need to change the resolution and format of a video. This can be done easily with an AWS service called Elastic Transcoder, a cloud-based media transcoding service that does exactly that. It is easy to use, cost-effective, and highly scalable for businesses and developers.

AWS Interview Questions for Intermediate and Experienced

19. With specified private IP addresses, can an Amazon Elastic Compute Cloud (EC2) instance be launched? If so, which Amazon service makes it possible?

Yes. Amazon VPC (Virtual Private Cloud) makes it possible.

20. Define Amazon EC2 regions and availability zones?

Regions are separate geographical areas, and each region may have one or more availability zones - isolated locations within the region. As a result, failure in one zone has no effect on EC2 instances in other zones. This configuration also helps to reduce latency and costs.

21. Explain Amazon EC2 root device volume?

The image used to boot an EC2 instance is stored on the root device volume. This volume is created when an Amazon AMI launches a new EC2 instance, and it is backed by either EBS or an instance store. In general, root device data on Amazon EBS is not affected by the lifespan of the EC2 instance.

22. Mention the different types of instances in Amazon EC2 and explain their features.

  1. General Purpose Instances: They are used to compute a range of workloads and aid in the allocation of processing, memory, and networking resources.
  2. Compute Optimized Instances: These are ideal for compute-intensive applications. They can handle batch processing workloads, high-performance web servers, machine learning inference, and various other tasks.
  3. Memory Optimized: They process workloads that handle massive datasets in memory and deliver them quickly.
  4. Accelerated Computing: These use hardware accelerators to speed up floating-point number calculations, data pattern matching, and graphics processing.
  5. Storage Optimized: They handle tasks that require sequential read and write access to big data sets on local storage.

23. Will your standby RDS be launched in the same availability zone as your primary?

No, standby instances are launched in different availability zones than the primary, resulting in physically separate infrastructures. This is because the entire purpose of standby instances is to prevent infrastructure failure. As a result, if the primary instance fails, the backup instance will assist in recovering all of the data.

Advanced AWS Interview Questions and Answers

24. What is the difference between a Spot Instance, an On-demand Instance, and a Reserved Instance?

Spot instances are unused EC2 instances that users can use at a reduced cost.

When you use on-demand instances, you must pay for computing resources without making long-term obligations.

Reserved instances, however, allow you to specify attributes such as instance type, platform, tenancy, region, and availability zone. When instances in certain availability zones are used, reserved instances offer significant reductions and capacity reservations.

25. How would you address a situation in which the relational database engine frequently collapses when traffic to your RDS instances increases, given that the RDS instance replica is not promoted as the master instance?

Move to a larger RDS instance type to handle the significant quantities of traffic, and produce manual or automated snapshots so that data can be recovered if the RDS instance fails.

26. What do you understand by 'changing' in Amazon EC2?

To make limit administration easier for customers, Amazon EC2 offers the option to switch from the current 'instance count-based limits' to the new 'vCPU-based limits.' As a result, when launching a combination of instance types based on demand, utilization is measured in terms of the number of vCPUs.

27. Define Snapshots in Amazon Lightsail?

Snapshots are point-in-time backups of instances, block storage disks, and databases in Lightsail. They can be produced manually or automatically at any moment. Your resources can always be restored from snapshots once they have been created, and the restored resources will perform the same tasks as the originals from which the snapshots were made.

AWS Scenario-based Questions

28. On an EC2 instance, an application of yours is active. Once the CPU usage on your instance hits 80%, you must reduce the load on it. What strategy do you use to complete the task?

It can be accomplished by setting up an Auto Scaling group that deploys additional instances when an EC2 instance's CPU usage surpasses 80%, and by distributing traffic across the instances through an Application Load Balancer with the EC2 instances registered as targets.
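One way to express this is a target-tracking scaling policy that holds the group's average CPU near 80%. A minimal boto3 sketch; the group and policy names are hypothetical:

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Target tracking adds or removes instances automatically so that the
    # group's average CPU utilization stays near the target value.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-asg",  # placeholder group name
        PolicyName="cpu-80-target",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": 80.0,
        },
    )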

29. Multiple Linux Amazon EC2 instances running a web application for a firm are being used, and data is being stored on Amazon EBS volumes. The business is searching for a way to provide storage that complies with atomicity, consistency, isolation, and durability while also increasing the application's resilience in the event of a breakdown (ACID). What steps should a solutions architect take to fulfill these demands?

Place the instances in an Auto Scaling group behind an Application Load Balancer that spans multiple Availability Zones. Create an Amazon EFS file system, mount a target on each instance, and save the data on Amazon EFS.

30. Your business prefers to use its email address and domain to send and receive compliance emails. What service do you recommend to implement this easily and cost-effectively?

This can be accomplished by using Amazon Simple Email Service (Amazon SES), a cloud-based email-sending service.

Technical and Non-Technical AWS Interview Questions

31. Describe SES.

Amazon offers the Simple Email Service (SES) service, which allows you to send bulk emails to customers swiftly at a minimal cost.

32. Describe PaaS.

PaaS (Platform as a Service) provides a cloud platform for developing, testing, and overseeing the operation of applications without managing the underlying infrastructure.

33. How many S3 buckets can be created?

Up to 100 buckets can be created per AWS account by default.

34. What is the maximum limit of elastic IPs anyone can produce?

A maximum of five Elastic IP addresses can be allocated per region per AWS account.

AWS Questions for Amazon EC2

35. What is Amazon EC2?

EC2 is short for Elastic Compute Cloud, and it provides scalable computing capacity. Using Amazon EC2 eliminates the need to invest in hardware, leading to faster development and deployment of applications. You can use Amazon EC2 to launch as many or as few virtual servers as needed, configure security and networking, and manage storage. It can scale up or down to handle changes in requirements, reducing the need to forecast traffic. EC2 provides virtual computing environments called “instances.”
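Launching an instance is a single API call. A minimal boto3 sketch; the AMI ID and key pair name are placeholders:

    import boto3

    ec2 = boto3.client("ec2")

    # Launch one t3.micro instance from the given AMI.
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
        InstanceType="t3.micro",
        KeyName="my-key-pair",            # placeholder key pair
        MinCount=1,
        MaxCount=1,
    )

    print(response["Instances"][0]["InstanceId"])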

36. What Are Some of the Security Best Practices for Amazon EC2?

Security best practices for Amazon EC2 include using Identity and Access Management (IAM) to control access to AWS resources; restricting access by allowing only trusted hosts or networks to reach ports on an instance; opening up only those permissions you require; and disabling password-based logins for instances launched from your AMI.

37. Can S3 Be Used with EC2 Instances, and If Yes, How?

Amazon S3 can be used for instances with root devices backed by local instance storage. That way, developers have access to the same highly scalable, reliable, fast, inexpensive data storage infrastructure that Amazon uses to run its own global network of websites. To execute systems in the Amazon EC2 environment, developers load Amazon Machine Images (AMIs) into Amazon S3 and then move them between Amazon S3 and Amazon EC2.

Amazon EC2 and Amazon S3 are two of the best-known web services that make up AWS.

38. What is the difference between stopping and terminating an EC2 instance? 

While you may think that both stopping and terminating are the same, there is a difference. When you stop an EC2 instance, it performs a normal shutdown and moves to a stopped state. However, when you terminate the instance, it is transferred to a terminated state, and the EBS volumes attached to it are deleted and can never be recovered.

39. What are the different types of EC2 instances based on their costs?

The three types of EC2 instances are:

  • On-demand Instance

It is cost-effective for short-term workloads but expensive when used over the long term.

  • Spot Instance

It is less expensive than the on-demand instance and can be bought through bidding. 

  • Reserved Instance

If you are planning to use an instance for a year or more, then this is the right one for you.

40. How do you set up SSH agent forwarding so that you do not have to copy the key every time you log in?

Here’s how you accomplish this:

  1. Go to your PuTTY Configuration
  2. Go to the category SSH -> Auth
  3. Enable SSH agent forwarding to your instance

 

41. What are Solaris and AIX operating systems? Are they available with AWS?

Solaris is an operating system that uses the SPARC processor architecture, which is not currently supported in the public cloud.

AIX is an operating system that runs only on Power CPUs, not on Intel, which means that you cannot create AIX instances in EC2.

Since both operating systems have these limitations, they are not currently available on AWS.

42. How do you configure CloudWatch to recover an EC2 instance?

Here’s how you can configure them:

  • Create an Alarm using Amazon CloudWatch
  • In the Alarm, go to Define Alarm -> Actions tab
  • Choose Recover this instance option
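The same alarm can be created programmatically; the recover action is the special arn:aws:automate:<region>:ec2:recover action attached to the StatusCheckFailed_System metric. A minimal boto3 sketch with a placeholder instance ID:

    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    # Recover the instance when the system status check fails
    # for two consecutive one-minute periods.
    cloudwatch.put_metric_alarm(
        AlarmName="recover-web-server",
        Namespace="AWS/EC2",
        MetricName="StatusCheckFailed_System",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        Statistic="Maximum",
        Period=60,
        EvaluationPeriods=2,
        Threshold=1.0,
        ComparisonOperator="GreaterThanOrEqualToThreshold",
        AlarmActions=["arn:aws:automate:us-east-1:ec2:recover"],
    )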

 

43. What are the common types of AMI designs?

There are many types of AMIs, but some of the common AMIs are:

  • Fully Baked AMI
  • Just Enough Baked AMI (JeOS AMI)
  • Hybrid AMI

44. What are Key-Pairs in AWS?

Key-Pairs are password-protected login credentials for virtual machines, used to prove our identity when connecting to Amazon EC2 instances. A Key-Pair is made up of a Private Key and a Public Key that let us connect to the instances.

AWS Interview Questions for S3

45. What is Amazon S3? 

S3 is short for Simple Storage Service, and Amazon S3 is the most widely supported storage platform available. S3 is object storage that can store and retrieve any amount of data from anywhere. Despite that versatility, it is practically unlimited as well as cost-effective because it is storage available on demand. In addition to these benefits, it offers unprecedented levels of durability and availability. Amazon S3 helps to manage data for cost optimization, access control, and compliance.

46. How can you recover/login to an EC2 instance for which you have lost the key?

Follow the steps provided below to recover an EC2 instance if you have lost the key:

  1. Verify that the EC2Config service is running
  2. Detach the root volume for the instance
  3. Attach the volume to a temporary instance
  4. Modify the configuration file
  5. Restart the original instance

47. What are some critical differences between AWS S3 and EBS?

Here are some differences between AWS S3 and EBS:

  • S3 is object storage accessed over the network; EBS is block storage attached to a single EC2 instance
  • S3 capacity is practically unlimited; an EBS volume has a fixed, provisioned size
  • S3 objects are accessed by key through an API or URL; an EBS volume appears to the instance as a disk with a file system
  • S3 replicates data across multiple facilities for very high durability; an EBS volume is replicated within a single Availability Zone

48. How do you allow a user to gain access to a specific bucket?

You need to follow the four steps provided below to allow access:

  1. Categorize your instances
  2. Define how authorized users can manage specific servers
  3. Lock down your tags
  4. Attach your policies to IAM users

 

49. How can you monitor S3 cross-region replication to ensure consistency without actually checking the bucket?

You can enable S3 replication metrics and event notifications, which publish replication progress and replication-failure events to Amazon CloudWatch, so you can monitor cross-region replication without inspecting the destination bucket.

 

50. What is SnowBall?

Snowball is a data transport solution that uses secure physical devices to transfer terabytes of data into and out of the AWS environment.

Data transfer using Snowball is done in the following ways:

  1. A job is created.
  2. The Snowball device is connected.
  3. The data is copied onto the Snowball device.
  4. The data is then moved to AWS S3.

51. What are the Storage Classes available in Amazon S3?

The Storage Classes that are available in the Amazon S3 are the following:

  • Amazon S3 Glacier Instant Retrieval storage class
  • Amazon S3 Glacier Flexible Retrieval (Formerly S3 Glacier) storage class
  • Amazon S3 Glacier Deep Archive (S3 Glacier Deep Archive)
  • S3 Outposts storage class
  • Amazon S3 Standard-Infrequent Access (S3 Standard-IA)
  • Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)
  • Amazon S3 Standard (S3 Standard)
  • Amazon S3 Reduced Redundancy Storage
  • Amazon S3 Intelligent-Tiering (S3 Intelligent-Tiering)

AWS Interview Questions for VPC

52. What Is Amazon Virtual Private Cloud (VPC) and Why Is It Used?

Amazon VPC lets you launch AWS resources into a logically isolated virtual network that you define, and it is also the best way of connecting to your cloud resources from your own data center. Once you connect your data center to the VPC in which your instances are present, each instance is assigned a private IP address that can be accessed from your data center. That way, you can access your public cloud resources as if they were on your own private network.

53. VPC is not resolving the server through DNS. What might be the issue, and how can you fix it?

To fix this problem, enable DNS hostnames and DNS resolution for the VPC (the enableDnsHostnames and enableDnsSupport attributes), so that instance DNS names resolve correctly.
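Both attributes can be enabled with boto3, one attribute per call; the VPC ID is a placeholder:

    import boto3

    ec2 = boto3.client("ec2")
    vpc_id = "vpc-0123456789abcdef0"  # placeholder VPC ID

    # modify_vpc_attribute accepts only one attribute per call.
    ec2.modify_vpc_attribute(VpcId=vpc_id, EnableDnsSupport={"Value": True})
    ec2.modify_vpc_attribute(VpcId=vpc_id, EnableDnsHostnames={"Value": True})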

54. How do you connect multiple sites to a VPC?

If you have multiple VPN connections, you can provide secure communication between sites using the AWS VPN CloudHub. It operates on a simple hub-and-spoke model: each site's VPN connection terminates at the VPC, and traffic is routed between sites through it.

 

55. Name and explain some security products and features available in VPC?

Here is a selection of security products and features:

  • Security groups - These act as a firewall for EC2 instances, controlling inbound and outbound traffic at the instance level.
  • Network access control lists - These act as a firewall for subnets, controlling inbound and outbound traffic at the subnet level.
  • Flow logs - These capture the inbound and outbound traffic from the network interfaces in your VPC.
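As an illustration of the instance-level control, here is a minimal boto3 sketch that opens inbound HTTPS on a hypothetical security group:

    import boto3

    ec2 = boto3.client("ec2")

    # Allow inbound HTTPS from anywhere at the instance level.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",  # placeholder security group ID
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTPS from anywhere"}],
        }],
    )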

56. How do you monitor Amazon VPC?

You can monitor VPC by using:

  • CloudWatch and CloudWatch logs
  • VPC Flow Logs

57. How many Subnets can you have per VPC?

We can have up to 200 subnets per Amazon Virtual Private Cloud (VPC) by default.

General AWS Interview Questions

58. When Would You Prefer Provisioned IOPS over Standard Rds Storage?

You would use Provisioned IOPS when you have I/O-intensive workloads, such as batch processing, that need consistently fast storage performance. Provisioned IOPS delivers high IO rates, but it is also expensive.

59. How Do Amazon Rds, Dynamodb, and Redshift Differ from Each Other?

Amazon RDS is a database management service for relational databases. It manages patching, upgrading, and data backups automatically. It’s a database management service for structured data only. On the other hand, DynamoDB is a NoSQL database service for dealing with unstructured data. Redshift is a data warehouse product used in data analysis.

60. What Are the Benefits of AWS’s Disaster Recovery?

Businesses use cloud computing in part to enable faster disaster recovery of critical IT systems without the cost of a second physical site. The AWS cloud supports many popular disaster recovery architectures ranging from small customer workload data center failures to environments that enable rapid failover at scale. With data centers all over the world, AWS provides a set of cloud-based disaster recovery services that enable rapid recovery of your IT infrastructure and data.

61. How can you add an existing instance to a new Auto Scaling group?

Here’s how you can add an existing instance to a new Auto Scaling group:

  • Open EC2 console
  • Select your instance under Instances
  • Choose Actions -> Instance Settings -> Attach to Auto Scaling Group
  • Select a new Auto Scaling group
  • Attach this group to the Instance
  • Edit the Instance if needed
  • Once done, you can successfully add the instance to a new Auto Scaling group
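The console steps above map to a single API call. A minimal boto3 sketch with placeholder names:

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Attach a running instance to an existing Auto Scaling group;
    # the group's desired capacity increases by one automatically.
    autoscaling.attach_instances(
        InstanceIds=["i-0123456789abcdef0"],  # placeholder instance ID
        AutoScalingGroupName="my-asg",        # placeholder group name
    )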

62. What are the factors to consider while migrating to Amazon Web Services?

Here are the factors to consider during AWS migration:

  • Operational Costs - These include the cost of infrastructure, ability to match demand and supply, transparency, and others.
  • Workforce Productivity 
  • Cost avoidance
  • Operational resilience
  • Business agility

63. What is RTO and RPO in AWS?

RTO or Recovery Time Objective is the maximum time your business or organization is willing to wait for a recovery to complete in the wake of an outage. On the other hand, RPO or Recovery Point Objective is the maximum amount of data loss your company is willing to accept as measured in time.

64. If you would like to transfer vast amounts of data, which is the best option among Snowball, Snowball Edge, and Snowmobile?

AWS Snowball is basically a data transport solution for moving high volumes of data into and out of a specified AWS region. AWS Snowball Edge adds additional computing functions on top of the data transport solution. Snowmobile is an exabyte-scale migration service; a single Snowmobile can transfer up to 100 PB of data.

65. Explain what T2 instances are?

The T2 Instances are intended to give the ability to burst to a higher performance whenever the workload demands it and also provide a moderate baseline performance to the CPU.

The T2 instances are General Purpose instance types and are low in cost as well. They are usually used wherever workloads do not consistently or often use the CPU. 

66. What are the advantages of AWS IAM?

AWS IAM allows an administrator to provide multiple users and groups with granular access. Various user groups and users may require varying levels of access to the various resources that have been developed. We may assign roles to users and create roles with defined access levels using IAM.

It further gives us Federated Access, which allows us to grant applications and users access to resources without having to create IAM Roles.

67. Explain Connection Draining

Connection Draining is an Elastic Load Balancing feature that allows us to keep serving in-flight requests on servers that are being decommissioned or updated.

By enabling Connection Draining, we let the Load Balancer stop sending new requests to a deregistering instance while giving it a set length of time to finish its existing requests. If Connection Draining is not enabled, a deregistering instance goes offline immediately and all of its pending requests fail.

68. What is Power User Access in AWS?

An Administrator User is effectively the owner of the AWS resources: they can build, change, delete, and inspect resources, as well as grant permissions to other AWS users.

Power User Access provides Administrator Access without the ability to manage users and permissions: a Power User can create, modify, view, and remove resources, but cannot grant permissions to other users.

AWS Questions for CloudFormation

69. How is AWS CloudFormation different from AWS Elastic Beanstalk?

Here are some differences between AWS CloudFormation and AWS Elastic Beanstalk:

  • AWS CloudFormation helps you provision and describe all of the infrastructure resources that are present in your cloud environment. On the other hand, AWS Elastic Beanstalk provides an environment that makes it easy to deploy and run applications in the cloud.
  • AWS CloudFormation supports the infrastructure needs of various types of applications, like legacy applications and existing enterprise applications. On the other hand, AWS Elastic Beanstalk is combined with the developer tools to help you manage the lifecycle of your applications.

70. What are the elements of an AWS CloudFormation template?

AWS CloudFormation templates are YAML- or JSON-formatted text files composed of five essential elements:

  • Template parameters
  • Output values
  • Data tables (mappings)
  • Resources
  • File format version

71. What happens when one of the resources in a stack cannot be created successfully?

If the resource in the stack cannot be created, then the CloudFormation automatically rolls back and terminates all the resources that were created in the CloudFormation template. This is a handy feature when you accidentally exceed your limit of Elastic IP addresses or don’t have access to an EC2 AMI.

 

AWS Questions for Elastic Block Storage

72. How can you automate EC2 backup using EBS?

Use the following steps in order to automate EC2 backup using EBS:

  1. Get the list of instances and connect to AWS through API to list the Amazon EBS volumes that are attached locally to the instance.
  2. List the snapshots of each volume, and assign a retention period of the snapshot. Later on, create a snapshot of each volume.
  3. Make sure to remove the snapshot if it is older than the retention period.
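A minimal boto3 sketch of that loop, snapshotting every volume attached to one hypothetical instance and pruning snapshots older than a seven-day retention period:

    import boto3
    from datetime import datetime, timedelta, timezone

    ec2 = boto3.client("ec2")
    instance_id = "i-0123456789abcdef0"  # placeholder instance ID
    retention = timedelta(days=7)

    # 1. List the EBS volumes attached to the instance and snapshot each one.
    volumes = ec2.describe_volumes(
        Filters=[{"Name": "attachment.instance-id", "Values": [instance_id]}]
    )["Volumes"]
    for volume in volumes:
        ec2.create_snapshot(VolumeId=volume["VolumeId"],
                            Description=f"backup of {instance_id}")

    # 2. Delete this instance's backup snapshots older than the retention period.
    cutoff = datetime.now(timezone.utc) - retention
    snapshots = ec2.describe_snapshots(
        OwnerIds=["self"],
        Filters=[{"Name": "description", "Values": [f"backup of {instance_id}"]}],
    )["Snapshots"]
    for snapshot in snapshots:
        if snapshot["StartTime"] < cutoff:
            ec2.delete_snapshot(SnapshotId=snapshot["SnapshotId"])

In practice you would run a script like this on a schedule, for example from a Lambda function triggered by an EventBridge rule, which is essentially what tools such as AWS Ops Automator automate for you.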

73. What is the difference between EBS and Instance Store?

EBS is a kind of permanent storage in which the data can be restored at a later point. When you save data in the EBS, it stays even after the lifetime of the EC2 instance. On the other hand, Instance Store is temporary storage that is physically attached to a host machine. With an Instance Store, you cannot detach one instance and attach it to another. Unlike in EBS, data in an Instance Store is lost if any instance is stopped or terminated.

74. Can you take a backup of EFS like EBS, and if yes, how?

Yes, you can use the EFS-to-EFS backup solution to recover from unintended changes or deletion in Amazon EFS. Follow these steps:

  1. Sign in to the AWS Management Console
  2. Click the launch EFS-to-EFS-restore button
  3. Use the region selector in the console navigation bar to select region
  4. Verify if you have chosen the right template on the Select Template page
  5. Assign a name to your solution stack
  6. Review the parameters for the template and modify them if necessary

75. How do you auto-delete old snapshots?

Here’s the procedure for auto-deleting old snapshots:

  • As per procedure and best practices, take snapshots of the EBS volumes on Amazon S3.
  • Use AWS Ops Automator to handle all the snapshots automatically.
  • This allows you to create, copy, and delete Amazon EBS snapshots.

 

AWS Interview Questions for Elastic Load Balancing

76. What are the different types of load balancers in AWS?

Three types of load balancers are supported by Elastic Load Balancing:

  1. Application Load Balancer
  2. Network Load Balancer
  3. Classic Load Balancer

77. What are the different uses of the various load balancers in AWS Elastic Load Balancing?

Application Load Balancer

Used if you need flexible application management and TLS termination.

Network Load Balancer

Used if you require extreme performance and static IPs for your applications.

Classic Load Balancer

Used if your application is built within the EC2 Classic network.

AWS Interview Questions for Security

78. What Is Identity and Access Management (IAM) and How Is It Used?

Identity and Access Management (IAM) is a web service for securely controlling access to AWS services. IAM lets you manage users, security credentials such as access keys, and permissions that control which AWS resources users and applications can access.

79. How can you use AWS WAF in monitoring your AWS applications?

AWS WAF, or AWS Web Application Firewall, protects your web applications from common web exploits. It helps you control the traffic flow to your applications. With WAF, you can also create custom rules that block common attack patterns. Its rules can act in three ways: allow all requests, block all requests, or count all requests for a new policy.

80. What are the different AWS IAM categories that you can control?

Using AWS IAM, you can do the following:

  • Create and manage IAM users
  • Create and manage IAM groups
  • Manage the security credentials of the users
  • Create and manage policies to grant access to AWS services and resources

81. What are the policies that you can set for your users’ passwords?

Here are some of the policies that you can set:

  • You can set a minimum length for the password, or you can ask users to include at least one number or special character.
  • You can assign requirements for particular character types, including uppercase letters, lowercase letters, numbers, and non-alphanumeric characters.
  • You can enforce automatic password expiration, prevent reuse of old passwords, and require a password reset upon the next AWS sign-in.
  • You can have AWS users contact an account administrator when a user has allowed their password to expire.
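Most of these rules map directly onto the IAM account password policy API. A minimal boto3 sketch with assumed values:

    import boto3

    iam = boto3.client("iam")

    # Enforce length, character classes, expiration, and reuse rules
    # for every IAM user password in the account.
    iam.update_account_password_policy(
        MinimumPasswordLength=14,
        RequireUppercaseCharacters=True,
        RequireLowercaseCharacters=True,
        RequireNumbers=True,
        RequireSymbols=True,
        MaxPasswordAge=90,            # days before a password expires
        PasswordReusePrevention=5,    # number of previous passwords remembered
        AllowUsersToChangePassword=True,
    )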

82. What is the difference between an IAM role and an IAM user?

The two key differences between the IAM role and IAM user are:

  • An IAM role is an IAM entity that defines a set of permissions for making AWS service requests, while an IAM user has permanent long-term credentials and is used to interact with AWS services directly.
  • An IAM role is assumed by trusted entities, such as IAM users, applications, or AWS services, whereas an IAM user signs in with its own credentials and uses AWS IAM functionality directly.

83. What are the managed policies in AWS IAM?

There are two types of managed policies; one that is managed by you and one that is managed by AWS. They are IAM resources that express permissions using IAM policy language. You can create, edit, and manage them separately from the IAM users, groups, and roles to which they are attached.

84. Can you give an example of an IAM policy and a policy summary?

Here’s an example of an IAM policy that grants access to add, update, and delete objects from a specific folder. The IAM console renders its policy summary as a condensed view of the same document, grouping the allowed actions by service, access level, and resource.
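A minimal sketch of such a policy, attached inline to a hypothetical user; the bucket, folder, and user names are placeholders:

    import json

    import boto3

    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            # Add/update and delete objects, but only under the reports/ prefix.
            "Action": ["s3:PutObject", "s3:DeleteObject"],
            "Resource": "arn:aws:s3:::my-bucket/reports/*",
        }],
    }

    iam = boto3.client("iam")
    iam.put_user_policy(
        UserName="report-writer",            # placeholder user
        PolicyName="reports-folder-access",  # placeholder policy name
        PolicyDocument=json.dumps(policy),
    )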

 

85. How does AWS IAM help your business?

IAM enables you to:

  • Manage IAM users and their access - AWS IAM provides secure resource access to multiple users
  • Manage access for federated users - AWS allows you to provide secure access to resources in your AWS account to your employees and applications without creating IAM users

AWS Interview Questions for Route 53

86. What Is Amazon Route 53?

Amazon Route 53 is a scalable and highly available Domain Name System (DNS). The name refers to TCP or UDP port 53, where DNS server requests are addressed.

87. What Is Cloudtrail and How Do Cloudtrail and Route 53 Work Together? 

CloudTrail is a service that captures information about every request sent to the Amazon Route 53 API by an AWS account, including requests that are sent by IAM users. CloudTrail saves log files of these requests to an Amazon S3 bucket. CloudTrail captures information about all requests. You can use information in the CloudTrail log files to determine which requests were sent to Amazon Route 53, the IP address that the request was sent from, who sent the request, when it was sent, and more.

88. What is the difference between Latency Based Routing and Geo DNS?

Geo-based DNS routing makes decisions based on the geographic location of the request, whereas Latency-based Routing utilizes latency measurements between networks and AWS data centers. Latency-based Routing is used when you want to give your customers the lowest latency possible. Geo-based routing, on the other hand, is used when you want to direct customers to different websites based on the country or region they are browsing from.

89. What is the difference between a Domain and a Hosted Zone?

Domain

A domain is a collection of data describing a self-contained administrative and technical unit. For example, www.simplilearn.com is a domain and a general DNS concept.

Hosted zone

A hosted zone is a container that holds information about how you want to route traffic on the internet for a specific domain. For example, lms.simplilearn.com is a hosted zone.

90. How does Amazon Route 53 provide high availability and low latency?

Here’s how Amazon Route 53 provides the resources in question:

Globally Distributed Servers

Amazon is a global service and consequently has DNS services globally. Any customer creating a query from any part of the world gets to reach a DNS server local to them that provides low latency. 

Dependency

Route 53 provides the high level of dependability required by critical applications.

Optimal Locations

Route 53 uses a global anycast network to answer queries from the optimal position automatically. 

AWS Interview Questions for Config

91. How does AWS config work with AWS CloudTrail?

AWS CloudTrail records user API activity on your account and allows you to access information about the activity. Using CloudTrail, you can get full details about API actions such as the identity of the caller, time of the call, request parameters, and response elements. On the other hand, AWS Config records point-in-time configuration details for your AWS resources as Configuration Items (CIs). 

You can use a CI to ascertain what your AWS resource looks like at any given point in time, whereas CloudTrail quickly answers who made an API call to modify the resource. You can also use CloudTrail to detect whether a security group was incorrectly configured.

92. Can AWS Config aggregate data across different AWS accounts?

Yes, you can set up AWS Config to deliver configuration updates from different accounts to one S3 bucket, once the appropriate IAM policies are applied to the S3 bucket.

AWS Interview Questions for Database

93. How are reserved instances different from on-demand DB instances?

Reserved instances and on-demand instances are functionally the same; they only differ in how they are billed.

Reserved instances are purchased as one-year or three-year reservations, and in return you get very low hourly pricing compared to on-demand instances, which are billed on an hourly basis.

94. Which type of scaling would you recommend for RDS and why?

There are two types of scaling - vertical scaling and horizontal scaling. Vertical scaling lets you scale up your master database with the press of a button; a database can only be scaled vertically, and there are around 18 instance types to choose from when resizing an RDS instance. Horizontal scaling, on the other hand, is good for replicas: read-only copies of the database, which in this context are provided through Amazon Aurora.

95. What is a maintenance window in Amazon RDS? Will your DB instance be available during maintenance events?

RDS maintenance window lets you decide when DB instance modifications, database engine version upgrades, and software patching have to occur. The automatic scheduling is done only for patches that are related to security and durability. By default, there is a 30-minute value assigned as the maintenance window and the DB instance will still be available during these events though you might observe a minimal effect on performance.

96. What are the consistency models in DynamoDB?

There are two consistency models in DynamoDB. First, there is the Eventual Consistency Model, which maximizes your read throughput. However, a read might not reflect the results of a recently completed write; fortunately, all copies of the data usually reach consistency within a second. The second model is the Strong Consistency Model, which returns the most up-to-date data on every read, at the cost of higher latency and read-capacity usage.

97. What type of query functionality does DynamoDB support?

DynamoDB supports GET/PUT operations using a user-defined primary key. It also provides flexible querying, letting you query on non-primary key attributes using global secondary indexes and local secondary indexes.
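A minimal boto3 sketch; the table name and the global secondary index are hypothetical:

    import boto3
    from boto3.dynamodb.conditions import Key

    table = boto3.resource("dynamodb").Table("Orders")

    # GET/PUT by the user-defined primary key.
    table.put_item(Item={"order_id": "1001", "customer": "acme", "total": 42})
    item = table.get_item(Key={"order_id": "1001"})

    # Query on a non-primary key attribute through a global secondary index.
    by_customer = table.query(
        IndexName="customer-index",  # hypothetical GSI on the customer attribute
        KeyConditionExpression=Key("customer").eq("acme"),
    )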

AWS Interview Questions - Short Answer Questions 

1. Suppose you are a game designer and want to develop a game with single-digit millisecond latency, which of the following database services would you use?

Amazon DynamoDB

2. If you need to perform real-time monitoring of AWS services and get actionable insights, which services would you use?

Amazon CloudWatch

3. As a web developer, you are developing an app, targeted primarily for the mobile platform. Which of the following lets you add user sign-up, sign-in, and access control to your web and mobile apps quickly and easily?

Amazon Cognito

4. You are a Machine Learning Engineer who is on the lookout for a solution that will discover sensitive information that your enterprise stores in AWS and then use NLP to classify the data and provide business-related insights. Which among the services would you choose?

AWS Macie

5. You are the system administrator in your company, which is running most of its infrastructure on AWS. You are required to track your users and keep tabs on how they are being authenticated. You wish to create and manage AWS users and use permissions to allow and deny their access to AWS resources. Which of the following services suits you best?

AWS IAM

6. Which service do you use if you want to allocate various private and public IP addresses to make them communicate with the internet and other instances?

Amazon VPC

7. This service provides you with cost-efficient and resizable capacity while automating time-consuming administration tasks

Amazon Relational Database Service

8. Which of the following is a means for accessing human researchers or consultants to help solve problems on a contractual or temporary basis?

Amazon Mechanical Turk

9. This service is used to make it easy to deploy, manage, and scale containerized applications using Kubernetes on AWS. Which of the following is this AWS service?

Amazon Elastic Container Service

10. This service lets you run code without provisioning or managing servers. Select the correct service from the below options

AWS Lambda

11. As an AWS Developer, using this pay-per-use service, you can send, store, and receive messages between software components. Which of the following is it?

Amazon Simple Queue Service

12. Which service do you use if you would like to host a real-time audio and video conferencing application on AWS, this service provides you with a secure and easy-to-use application?

Amazon Chime

13. As your company's AWS Solutions Architect, you are in charge of designing thousands of similar individual jobs. Which of the following services best meets your requirements?

AWS Batch

AWS Interview Questions - Multiple-Choice

1. Suppose you are a game designer and want to develop a game with single-digit millisecond latency, which of the following database services would you use?

  1. Amazon RDS
  2. Amazon Neptune
  3. Amazon Snowball
  4. Amazon DynamoDB

2. If you need to perform real-time monitoring of AWS services and get actionable insights, which services would you use?

  1. Amazon Firewall Manager
  2. Amazon GuardDuty
  3. Amazon CloudWatch
  4. Amazon EBS

3. As a web developer, you are developing an app, targeted especially for the mobile platform. Which of the following lets you add user sign-up, sign-in, and access control to your web and mobile apps quickly and easily?

  1. AWS Shield
  2. AWS Macie
  3. AWS Inspector
  4. Amazon Cognito

4. You are a Machine Learning Engineer who is on the lookout for a solution that will discover sensitive information that your enterprise stores in AWS and then use NLP to classify the data and provide business-related insights. Which among the services would you choose?

  1. AWS Firewall Manager
  2. AWS IAM
  3. AWS Macie
  4. AWS CloudHSM

5. You are the system administrator in your company, which is running most of its infrastructure on AWS. You are required to track your users and keep tabs on how they are being authenticated. You wish to create and manage AWS users and use permissions to allow and deny their access to AWS resources. Which of the following services suits you best?

  1. AWS Firewall Manager
  2. AWS Shield
  3. Amazon API Gateway
  4. AWS IAM

6. Which service do you use if you want to allocate various private and public IP addresses in order to make them communicate with the internet and other instances?

  1. Amazon Route 53
  2. Amazon VPC
  3. Amazon API Gateway
  4. Amazon CloudFront

7. This service provides you with cost-efficient and resizable capacity while automating time-consuming administration tasks

  1. Amazon Relational Database Service
  2. Amazon Elasticache
  3. Amazon VPC
  4. Amazon Glacier

8. Which of the following is a means for accessing human researchers or consultants to help solve problems on a contractual or temporary basis?

  1. Amazon Mechanical Turk
  2. Amazon Elastic Mapreduce
  3. Amazon DevPay
  4. Multi-Factor Authentication

9. This service is used to make it easy to deploy, manage, and scale containerized applications using Kubernetes on AWS. Which of the following is this AWS service?

  1. Amazon Elastic Container Service
  2. AWS Batch
  3. AWS Elastic Beanstalk
  4. Amazon Lightsail

10. This service lets you run code without provisioning or managing servers. Select the correct service from the below options

  1. Amazon EC2 Auto Scaling
  2. AWS Lambda
  3. AWS Batch
  4. Amazon Inspector

11. As an AWS Developer, using this pay-per-use service, you can send, store and receive messages between software components. Which of the following is it?

  1. AWS Step Functions
  2. Amazon MQ
  3. Amazon Simple Queue Service
  4. Amazon Simple Notification Service

12. Which service would you use to host a real-time audio and video conferencing application on AWS? This service provides a secure and easy-to-use application.

  1. Amazon Chime
  2. Amazon WorkSpaces
  3. Amazon MQ
  4. Amazon AppStream

13. As your company's AWS Solutions Architect, you are in charge of designing thousands of similar individual jobs. Which of the following services best meets your requirements?

  1. AWS EC2 Auto Scaling
  2. AWS Snowball
  3. AWS Fargate
  4. AWS Batch

14. You are a Machine Learning engineer and you are looking for a service that helps you build and train Machine Learning models in AWS. Which among the following are we referring to?

  1. Amazon SageMaker
  2. AWS DeepLens
  3. Amazon Comprehend
  4. Device Farm

15. Imagine that you are working for your company's IT team. You are assigned to adjust the capacity of AWS resources based on the incoming application and network traffic. How would you do it?

  1. Amazon VPC
  2. AWS IAM
  3. Amazon Inspector
  4. Amazon Elastic Load Balancing


16. This cross-platform video game development engine, which supports PC, Xbox, PlayStation, iOS, and Android platforms, allows developers to build and host their games on Amazon's servers.

  1. Amazon GameLift
  2. AWS Greengrass
  3. Amazon Lumberyard
  4. Amazon Sumerian

17. You are the Project Manager of your company's Cloud Architects team. You are required to visualize, understand and manage your AWS costs and usage over time. Which of the following services works best?

  1. AWS Budgets
  2. AWS Cost Explorer
  3. Amazon WorkMail
  4. Amazon Connect

18. You are the chief Cloud Architect at your company. How can you automatically monitor and adjust compute resources to ensure maximum performance and efficiency of all scalable resources?

  1. AWS CloudFormation 
  2. AWS Aurora
  3. AWS Auto Scaling
  4. Amazon API Gateway

19. As a database administrator, you will employ a service that is used to set up and manage databases such as MySQL, MariaDB, and PostgreSQL. Which service are we referring to?

  1. Amazon Aurora
  2. AWS RDS
  3. Amazon Elasticache
  4. AWS Database Migration Service

20. A part of your marketing work requires you to push messages onto Google, Facebook, Windows, and Apple through APIs or AWS Management Console. Which of the following services do you use?

  1. AWS CloudTrail
  2. AWS Config
  3. Amazon Chime
  4. AWS Simple Notification Service

 

Basic AWS Interview Questions for Freshers

1. What is AWS?

AWS (Amazon Web Services) is a platform that provides secure cloud services, including database storage, computing power, and content delivery, to help businesses scale and grow.

 

2. Give the comparison between AWS and OpenStack.

Criteria | AWS | OpenStack

License | Amazon proprietary | Open-source

Operating system | AMIs provided by AWS | Provided as per the cloud administrator

Performing repeatable operations | Through templates | Through text files


3. What is the importance of buffer in Amazon Web Services?

An Elastic Load Balancer ensures that incoming traffic is distributed optimally across various AWS instances. A buffer synchronizes different components and makes the arrangement elastic to a burst of load or traffic. Components otherwise receive and process requests at unstable rates; the buffer creates an equilibrium between them so that they work at the same pace and supply faster services.

 

4. How are Spot Instances, On-demand Instances, and Reserved Instances different from one another?

All three are pricing models. Spot and On-demand Instances differ as follows, while Reserved Instances offer a significant discount over On-demand pricing in exchange for a one- or three-year commitment.

Spot Instance | On-demand Instance

With Spot Instances, customers can purchase compute capacity with no upfront commitment at all. | With On-demand Instances, users can launch instances at any time based on demand.

Spot Instances are spare Amazon instances that you can bid for. | On-demand Instances are suitable for the high-availability needs of applications.

When the bidding price exceeds the spot price, the instance is automatically launched; the spot price fluctuates based on supply and demand for instances. | On-demand Instances are launched by users on the pay-as-you-go model.

When the bidding price is less than the spot price, the instance is immediately taken away by Amazon. | On-demand Instances remain persistent, with no automatic termination from Amazon.

Spot Instances are charged on an hourly basis. | On-demand Instances are charged on a per-second basis.

 


5. Why do we make subnets?

Creating subnets means dividing a large network into smaller ones. Subnets are created for several reasons. For example, they help reduce congestion by ensuring that traffic destined for a subnet stays within that subnet, which makes routing more efficient and reduces the load on the network.

 


6. Is there a way to upload a file that is greater than 100 megabytes on Amazon S3?

Yes, it is possible by using the multipart upload utility from AWS. With the multipart upload utility, larger files can be uploaded in multiple parts that are uploaded independently. You can also decrease upload time by uploading these parts in parallel. After the upload is done, the parts will be merged into a single object or file to create the original file from which the parts were created.
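For illustration, here is a minimal AWS CLI sketch (bucket and file names are hypothetical); the high-level aws s3 cp command performs multipart uploads automatically once a file crosses the configured threshold:

# Lower the size at which the CLI switches to multipart upload (default is 8 MB)
aws configure set default.s3.multipart_threshold 64MB
# Upload; the CLI splits the file into parts, uploads them in parallel, and merges them
aws s3 cp large-video.mp4 s3://my-example-bucket/large-video.mp4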

 


7. What is the maximum number of S3 buckets you can create?

By default, you can create 100 S3 buckets per AWS account. This is a soft limit that can be raised by requesting a service quota increase.

 


8. How can you save the data on root volume on an EBS-backed machine?

We can save the data by disabling the DeleteOnTermination flag on the root EBS volume, either at launch or by modifying the instance's block device mapping, so the volume persists after the instance is terminated.
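As a rough AWS CLI sketch (the instance ID and device name are placeholders), the flag can be turned off on a running instance:

aws ec2 modify-instance-attribute \
  --instance-id i-0123456789abcdef0 \
  --block-device-mappings '[{"DeviceName": "/dev/xvda", "Ebs": {"DeleteOnTermination": false}}]'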

 

9. When should you use the classic load balancer and the application load balancer?

The classic load balancer is used for simple load balancing of traffic across multiple EC2 instances.

 


The Application Load Balancer, on the other hand, is used for more intelligent load balancing based on the multi-tier or container-based architecture of the application. It is mostly used when traffic needs to be routed to multiple services.

 



 

10. How many total VPCs per account/region and subnets per VPC can you have?

By default, we can have 5 VPCs per account per Region and 200 subnets per VPC. Both are soft limits that can be increased on request.

 

11. Your organization has decided to have all its workloads on the public cloud. But, due to certain security concerns, your organization has decided to distribute some of the workload on private servers. You are asked to suggest a cloud architecture for your organization. What will your suggestion be?

A hybrid cloud. The hybrid cloud architecture is where an organization can use the public cloud for shared resources and the private cloud for confidential workloads.

 

12. Which one of the storage solutions offered by AWS would you use if you need extremely low pricing and data archiving?

Amazon S3 Glacier is an extremely low-cost storage service offered by Amazon for data archiving and backup. It is designed for long-term storage of data that is accessed infrequently, which is what makes its pricing so low.

 


 

13. You have connected four instances to ELB. To automatically terminate your unhealthy instances and replace them with new ones, which functionality would you use?

Auto-scaling groups


14. The data on the root volumes of instance store-backed and EBS-backed instances gets deleted by default when they are terminated. If you want to prevent that from happening, which instance would you use?

EBS-backed instances. EBS-backed instances use EBS volume as their root volume. EBS volume consists of virtual drives that can be easily backed up and duplicated by snapshots.

 

EBS Backed Instances

The biggest advantage of EBS-backed volumes is that the data can be configured to be stored for later retrieval even if the virtual machine or the instances are shut down.

 

15. How will you configure an Amazon S3 bucket to serve static assets for your public web application?

By configuring the bucket policy to provide public read access to all objects

 

That is all we have in our section on basic Amazon Web Services interview questions.

 

Intermediate AWS Interview Questions and Answers

16. Your organization wants to send and receive compliance emails to its clients using its email address and domain. What service would you suggest for achieving the same easily and cost-effectively?

Amazon Simple Email Service (Amazon SES), which is a cloud-based email-sending service, can be used for this purpose.

 

17. Can you launch Amazon Elastic Compute Cloud (EC2) instances with predetermined private IP addresses? If yes, then with which Amazon service is it possible?

Yes. It is possible by using VPC (Virtual Private Cloud).

 


 

18. If you launch a standby RDS, will it be launched in the same availability zone as your primary?

No, standby instances are automatically launched in different availability zones than the primary, making them physically independent infrastructures. This is because the whole purpose of standby instances is to prevent infrastructure failure. So, in case the primary goes down, the standby instance will help recover all of the data.

 

19. What is the name of Amazon's Content Delivery Network?

Amazon CloudFront

 

20. Which Amazon solution will you use if you want to accelerate moving petabytes of data in and out of AWS, using storage devices that are designed to be secure for data transfer?

Amazon Snowball. AWS Snowball is the data transport solution for large amounts of data that need to be moved into and out of AWS using physical storage devices.

 

21. If you are running your DB instance as a Multi-AZ deployment, can you use standby DB instances along with your primary DB instance?

No, the standby DB instance cannot be used along with the primary DB instances since the standby DB instances are supposed to be used only if the primary instance goes down.

 

Interested in learning AWS? Enroll in our AWS Training in Pune!

 

22. Your organization is developing a new multi-tier web application in AWS. Being a fairly new and small organization, there’s limited staff. However, the organization requires high availability. This new application comprises complex queries and table joins. Which Amazon service will be the best solution for your organization’s requirements?

Amazon Aurora (or Amazon RDS) will be the right choice here. The application involves complex queries and table joins, which call for a relational database; DynamoDB is a NoSQL store and does not support joins. Aurora's Multi-AZ deployments also provide the required high availability with minimal operational overhead for a small team.

 

23. You accidentally stopped an EC2 instance in a VPC with an associated Elastic IP. If you start the instance again, what will be the result?

Elastic IP will only be disassociated from the instance if it’s terminated. If it’s stopped and started, there won’t be any change to the instance, and no data will be lost.

 

24. Your organization has around 50 IAM users. Now, it wants to introduce a new policy that will affect the access permissions of an IAM user. How can it implement this without having to apply the policy at the individual user level?

It is possible by using AWS IAM groups: add the users to groups according to their roles and then apply the policy to the groups.

 


 

Advanced AWS Interview Questions for Experienced

25. Your organization is using DynamoDB for its application. This application collects data from its users every 10 minutes and stores it in DynamoDB. Then every day, after a particular time interval, the data (respective of each user) is extracted from DynamoDB and sent to S3. Then, the application visualizes this data for the users. You are asked to propose a solution to help optimize the backend of the application for latency at a lower cost. What would you recommend?

Amazon ElastiCache. ElastiCache is a managed in-memory caching solution offered by Amazon.

It can be used to cache the frequently requested, per-user data (such as the daily extracts) so that repeated requests are answered from the cache instead of querying DynamoDB or S3 each time, which reduces both latency and cost.

 


 

26. I created a web application with autoscaling. I observed that the traffic on my application is the highest on Wednesdays and Fridays between 9 AM and 7 PM. What would be the best solution for me to handle the scaling?

Configure a scheduled scaling policy in Auto Scaling that scales out ahead of the predictable Wednesday and Friday 9 AM to 7 PM peaks and scales back in afterward.
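A minimal AWS CLI sketch, assuming a hypothetical Auto Scaling group named web-asg; scheduled actions use a cron-style recurrence (in UTC), so one action scales out before the peak and another scales back in afterward (Fridays would need equivalent actions with weekday 5):

# Scale out every Wednesday at 9 AM (cron fields: minute hour day month weekday; 3 = Wednesday)
aws autoscaling put-scheduled-update-group-action --auto-scaling-group-name web-asg \
  --scheduled-action-name wed-scale-out --recurrence "0 9 * * 3" \
  --min-size 4 --max-size 12 --desired-capacity 8
# Scale back in at 7 PM the same day
aws autoscaling put-scheduled-update-group-action --auto-scaling-group-name web-asg \
  --scheduled-action-name wed-scale-in --recurrence "0 19 * * 3" \
  --min-size 2 --max-size 12 --desired-capacity 2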

 

27. How would you handle a situation where the relational database engine crashes often whenever the traffic to your RDS instances increases, given that the replica of the RDS instance is not promoted as the master instance?

Opt for a bigger RDS instance type to handle the large amount of traffic, and create manual or automated snapshots so that data can be recovered if the RDS instance goes down.

 


 

28. You have an application running on your Amazon EC2 instance. You want to reduce the load on your instance as soon as the CPU utilization reaches 100 percent. How will you do that?

It can be done by creating an Auto Scaling group that deploys more instances when CPU utilization reaches the threshold, and by creating a load balancer, registering the Amazon EC2 instances with it, and letting it distribute the traffic among them.
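One way to express this is a target tracking scaling policy; a minimal AWS CLI sketch (the group name web-asg is hypothetical, and a 70 percent target leaves headroom before the instances saturate):

aws autoscaling put-scaling-policy --auto-scaling-group-name web-asg \
  --policy-name cpu-target-tracking --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{"PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"}, "TargetValue": 70.0}'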

 


 

 

29. What would I have to do if I wanted to access Amazon Simple Storage buckets and use the information for access audits?

AWS CloudTrail can be used in this case, as it is designed for logging and tracking API calls, including calls made to Amazon S3, so its logs can serve as the basis for access audits.

 


 

30. I created a key in the North Virginia region to encrypt my data in the Oregon region. I also added three users to the key and an external AWS account. Then, to encrypt an object in S3, when I tried to use the same key, it was not listed. Where did I go wrong?

The data and the key should be in the same region. That is, the data that has to be encrypted should be in the same region as the one in which the key was created. In this case, the data is in the Oregon region, whereas the key was created in the North Virginia region.

 

31. Suppose, you hosted an application on AWS that lets the users render images and do some general computing. Which of the below-listed services can you use to route the incoming user traffic?

Classic Load Balancer

Application Load Balancer

Network Load Balancer

Application Load Balancer: It supports path-based routing of the traffic and hence helps in enhancing the performance of the application structured as smaller services.

 


Using an Application Load Balancer, traffic can be routed based on the request path: requests for rendering images can be directed to the servers deployed only for image rendering, and requests for computing can be directed to the servers deployed only for general computing.

 

32. Suppose, I created a subnet and launched an EC2 instance in the subnet with default settings. Which of the following options will be ready to use on the EC2 instance as soon as it is launched?

Elastic IP

Private IP

Public IP

Internet Gateway

Private IP. A private IP is automatically assigned to the instance as soon as it is launched. An Elastic IP has to be attached manually, and a public IP requires an Internet Gateway, which has to be created separately since it is a new VPC.

 


 

33. Your organization has four instances for production and another four for testing. You are asked to set up a group of IAM users that can only access the four production instances and not the other four testing instances. How will you achieve this?

We can achieve this by defining tags on the test and production instances (for example, Environment=production) and then adding a condition to the IAM policy that allows access only to resources with the specific tag.
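As an illustrative sketch (the policy name and tag values are hypothetical), the condition uses the ec2:ResourceTag condition key; note that only EC2 actions that support resource-level permissions honor it:

aws iam create-policy --policy-name prod-instances-only --policy-document '{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["ec2:StartInstances", "ec2:StopInstances", "ec2:RebootInstances"],
    "Resource": "arn:aws:ec2:*:*:instance/*",
    "Condition": {"StringEquals": {"ec2:ResourceTag/Environment": "production"}}
  }]
}'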

 


 

34. Your organization wants to monitor the read-and-write IOPS for its AWS MySQL RDS instance and then send real-time alerts to its internal operations team. Which service offered by Amazon can help your organization achieve this scenario?

Amazon CloudWatch would help us achieve this. Since Amazon CloudWatch is a monitoring tool offered by Amazon, it’s the right service to use in the above-mentioned scenario.

 

35. Which of the following services can be used if you want to capture client connection information from your load balancer at a particular time interval?

Enabling access logs on your load balancer

Enabling CloudTrail for your load balancer

Enabling CloudWatch metrics for your load balancer

Enabling access logs on your load balancer. Access logs capture detailed information about each client connection and request sent to your load balancer, including the client's IP address, latencies, request paths, and server responses, published at a configured interval to Amazon S3 for analysis. CloudTrail, by contrast, records API calls made against the load balancer, not client connections.
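For reference, access logs for an Application Load Balancer can be switched on with the AWS CLI roughly like this (the ARN and bucket name are placeholders; the bucket needs a policy allowing the ELB service to write to it):

aws elbv2 modify-load-balancer-attributes \
  --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:111111111111:loadbalancer/app/my-alb/1234567890abcdef \
  --attributes Key=access_logs.s3.enabled,Value=true Key=access_logs.s3.bucket,Value=my-lb-logs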

 


 

36. You have created a VPC with private and public subnets. In what kind of subnet would you launch the database servers?

Database servers should be ideally launched on private subnets. Private subnets are ideal for the backend services and databases of all applications since they are not meant to be accessed by the users of the applications, and private subnets are not routable from the Internet.

 

37. Is it possible to switch from an Instance-backed root volume to an EBS-backed root volume at any time?

No, it is not possible.

 

38. Can you change the instance type of the instances that are running in your application tier and are also using autoscaling? If yes, then how? (Choose one of the following)

Yes, by modifying the autoscaling launch configuration

Yes, by modifying the autoscaling tags configuration

Yes, by modifying the autoscaling policy configuration

No, it cannot be changed

Yes, the instance type of such instances can be changed by modifying the autoscaling launch configuration. The tags configuration is used to add metadata to the instances.

 


 

39. Can you name the additional network interface that can be created and attached to your Amazon EC2 instance launched in your VPC?

Elastic Network Interface (ENI)

 

40. Out of the following options, where does the user specify the maximum number of instances with the autoscaling commands?

Autoscaling policy configuration

Autoscaling group

Autoscaling tags configuration

Autoscaling group configuration

Autoscaling Group configuration

 

41. Which service provided by AWS can you use to transfer objects from your data center, when you are using Amazon CloudFront?

AWS Direct Connect. It is an AWS networking service that acts as an alternative to using the Internet to connect customers' on-premises sites with AWS.

 

42. You have deployed multiple EC2 instances across multiple availability zones to run your website. You have also deployed a Multi-AZ RDS MySQL Extra Large DB Instance. The site performs a high number of small read and write operations per second. After some time, you observed that there is read contention on RDS MySQL. What would be your approach to resolve the contention and optimize your website?

We can deploy ElastiCache in-memory cache running in every availability zone. This will help in creating a cached version of the website for faster access in each availability zone. We can also add an RDS MySQL read replica in each availability zone that can help with efficient and better performance for read operations. So, there will not be any increased workload on the RDS MySQL instance, hence resolving the contention issue.

 

43. Your company wants you to propose a solution so that the company’s data center can be connected to the Amazon cloud network. What would your proposal be?

The data center can be connected to the Amazon cloud network by establishing a virtual private network (VPN) between the VPC and the data center. A virtual private network lets you establish a secure pathway or tunnel from your premise or device to the AWS global network.

 

44. Which of the following Amazon Services would you choose if you want complex querying capabilities but not a whole data warehouse?

RDS

Redshift

ElastiCache

DynamoDB

Amazon RDS

 

45. You want to modify the security group rules while it is being used by multiple EC2 instances. Will you be able to do that? If yes, will the new rules be implemented on all previously running EC2 instances that were using that security group?

Yes, a security group that is being used by multiple EC2 instances can be modified. The changes are implemented immediately and applied to all previously running EC2 instances that use the security group, without restarting the instances.

 


 

46. Which one of the following is a structured data store that supports indexing and data queries to both EC2 and S3?

DynamoDB

MySQL

Aurora

SimpleDB

SimpleDB

 

47. Which service offered by Amazon will you choose if you want to collect and process e-commerce data for near real-time analysis? (Choose any two)

DynamoDB

Redshift

Aurora

SimpleDB


DynamoDB. DynamoDB is a fully managed NoSQL database service that can be fed any type of unstructured data. Hence, DynamoDB is the best choice for collecting data from e-commerce websites. For near-real-time analysis, we can use Amazon Redshift.

 

48. If in CloudFront the content is not present at an edge location, what will happen when a request is made for that content?

CloudFront will deliver the content directly from the origin server. It will also store the content in the cache of the edge location where the content was missing.

 

49. Can you change the private IP address of an EC2 instance while it is running or in a stopped state?

No, it cannot be changed. When an EC2 instance is launched, a private IP address is assigned to that instance at boot time. This private IP address is attached to the instance for its entire lifetime and can never be changed.

 

50. Which of the following options will you use if you have to move data over long distances using the Internet, from instances that are spread across countries to your Amazon S3 bucket?

Amazon CloudFront

Amazon Transfer Acceleration

Amazon Snowball

Amazon Glacier

Amazon S3 Transfer Acceleration. It can speed up data transfers by as much as 300 percent by routing them over optimized network paths through Amazon CloudFront's globally distributed edge locations. Snowball cannot be used here, as it does not support cross-region data transfer.
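A quick sketch of enabling Transfer Acceleration on a bucket and routing uploads through the accelerated endpoint (the bucket name is a placeholder):

aws s3api put-bucket-accelerate-configuration --bucket my-bucket \
  --accelerate-configuration Status=Enabled
# Make the CLI use the accelerate endpoint for S3 transfers
aws configure set default.s3.use_accelerate_endpoint true
aws s3 cp data.tar s3://my-bucket/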

 

51. Which of the following services is a data storage system that also has a REST API interface and uses secure HMAC-SHA1 authentication keys?

Amazon Elastic Block Store

Amazon Snapshot

Amazon S3

Amazon S3. It gets various requests from applications, and it has to identify which requests are to be allowed and which are to be denied. Amazon S3 REST API uses a custom HTTP scheme based on a keyed HMAC for the authentication of requests.

 

52. What is EC2?

Launched in 2006, EC2 (Elastic Compute Cloud) provides virtual machines that you can use to deploy your servers in the cloud, with OS-level control. It gives you control over the hardware configuration and updates, similar to on-premises servers. EC2 instances can run Windows or Linux and support platforms and applications such as Python, PHP, and Apache.

 


 

53. What is Snowball?

Snowball is an application designed for transferring terabytes of data into and outside of the AWS cloud. It uses secured physical storage to transfer the data. Snowball is considered a petabyte-scale data transport solution that helps with cost and time savings.

 

54. What is CloudWatch?

Amazon CloudWatch is used for monitoring and managing data and for getting actionable insights into AWS and on-premises applications. It helps you monitor your entire stack, including applications, infrastructure, and services. CloudWatch also helps you optimize resource utilization and cost by providing analytics-driven insights.

 

55. What is Elastic Transcoder?

In the AWS cloud, Elastic Transcoder is used for converting media files into versions that can be played on devices such as tablets, PCs, and smartphones. It offers advanced transcoding features, with conversion rates starting from $0.0075 per minute.

 


 

56. What does an AMI include?

AMI stands for Amazon Machine Images. It includes the following:

 

One or more Amazon Elastic Block Store (Amazon EBS) snapshots or, for instance store-backed AMIs, a template for the root volume of the instance.

Launch permissions that control which AWS accounts can use the AMI to launch instances.

A block device mapping that specifies the volumes to attach to the instance when it is launched.

57. What are the Storage Classes available in Amazon S3?

The following storage classes are available in Amazon S3:

 

S3 Standard- The default storage class. If no storage class is specified when an object is uploaded, Amazon S3 assigns S3 Standard. It suits frequently accessed data.

S3 Intelligent-Tiering- Automatically moves objects between access tiers based on changing access patterns.

S3 Standard-IA and S3 One Zone-IA- For infrequently accessed data; One Zone-IA stores data in a single Availability Zone at a lower cost.

S3 Glacier Instant Retrieval, S3 Glacier Flexible Retrieval, and S3 Glacier Deep Archive- Low-cost archival classes with progressively longer retrieval times.

Reduced Redundancy Storage- A legacy class for non-critical, reproducible data stored with less redundancy. For new data, S3 Standard is recommended instead.

 

58. What are the native AWS security logging capabilities?

The native AWS security logging capabilities include AWS CloudTrail, AWS Config, AWS detailed billing reports, Amazon S3 access logs, Elastic Load Balancing access logs, Amazon CloudFront access logs, and Amazon VPC Flow Logs.

 


 

59. What kind of IP address can you use for your customer gateway (CGW) address?

We can use an Internet-routable (public) IP address. If the customer gateway is behind a NAT device, this is the public IP address of the NAT device.

 

AWS Scenario-Based Interview Questions

60. A company has a web application server running in the N. Virginia region with a large EBS volume of approximately 500 GB. To meet business demand, the company needs to migrate the server to another AWS account's Mumbai location. What is the best way to migrate the server, and what information about the target AWS account does the administrator require?

Create an AMI of the server running in the North Virginia region. Once the AMI is created, the administrator will need the 12-digit account number of the target (#2) AWS account. This is required for sharing and copying the AMI that was created.

 

Once the AMI is successfully copied into the Mumbai region, you can launch an instance from the copied AMI there. When the instance is running and fully operational, the server in the North Virginia region can be terminated. This is the cleanest way to migrate a server to a different account.
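Sketched with the AWS CLI (all IDs and the 12-digit account number are placeholders), the share-and-copy flow looks roughly like this; note that the AMI's backing snapshot must be shared as well:

# In the source account (us-east-1): share the AMI and its snapshot with account #2
aws ec2 modify-image-attribute --image-id ami-0abc1234def567890 \
  --launch-permission "Add=[{UserId=222222222222}]"
aws ec2 modify-snapshot-attribute --snapshot-id snap-0abc1234def567890 \
  --attribute createVolumePermission --operation-type add --user-ids 222222222222
# In the target account: copy the shared AMI into the Mumbai region
aws ec2 copy-image --source-image-id ami-0abc1234def567890 --source-region us-east-1 \
  --region ap-south-1 --name "migrated-web-server"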

 

 

 

 

61. A start-up company has a web application based in the us-east-1 region, with multiple Amazon EC2 instances running behind an Application Load Balancer across multiple Availability Zones. As the company's user base grows in the us-west-1 region, it needs a solution with low latency and improved availability. What should a solutions architect do to achieve this?

Note that the web application currently runs in us-east-1 while the user base is growing in us-west-1. As a first step, provision multiple EC2 instances (web application servers) and configure an Application Load Balancer in us-west-1. Then create an accelerator in AWS Global Accelerator with an endpoint group that includes the load balancer endpoints in both regions.

 


 

62. A company currently operates a web application backed by an Amazon RDS MySQL database. It has automated backups that are run daily and are not encrypted. A security audit requires future backups to be encrypted and unencrypted backups to be destroyed. The company will make at least one encrypted backup before destroying the old backups. What should be done to enable encryption for future backups?

Create a snapshot of the database.

Copy it to an encrypted snapshot.

Restore the database from the encrypted snapshot.
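The same three steps sketched with the AWS CLI (the identifiers and KMS key alias are placeholders):

aws rds create-db-snapshot --db-instance-identifier mydb --db-snapshot-identifier mydb-plain
aws rds copy-db-snapshot --source-db-snapshot-identifier mydb-plain \
  --target-db-snapshot-identifier mydb-encrypted --kms-key-id alias/aws/rds
aws rds restore-db-instance-from-db-snapshot --db-instance-identifier mydb-enc \
  --db-snapshot-identifier mydb-encrypted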

63. A company is going to launch a branch in the UK while continuing to operate its main branch in the USA. The company has almost 15 GB of data stored in an S3 bucket in the Ohio region using the default storage class. It wants its data replicated to an S3 bucket in London using the S3 One Zone-IA storage class to save on storage costs, and it wants the London bucket updated automatically whenever data is modified or written in the Ohio bucket.

Configure a Cross-Region Replication rule on the Ohio bucket (versioning must be enabled on both buckets) with the London bucket as the destination, and set the destination storage class to One Zone-IA to save cost.
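A rough AWS CLI sketch (bucket names, account ID, and IAM role are placeholders; the role must permit S3 replication):

aws s3api put-bucket-replication --bucket ohio-source-bucket --replication-configuration '{
  "Role": "arn:aws:iam::111111111111:role/s3-replication-role",
  "Rules": [{
    "Status": "Enabled", "Priority": 1, "Filter": {},
    "DeleteMarkerReplication": {"Status": "Disabled"},
    "Destination": {"Bucket": "arn:aws:s3:::london-dest-bucket", "StorageClass": "ONEZONE_IA"}
  }]
}'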

 

64. You are an AWS Architect in your company, and you are asked to create a new VPC in the N.Virginia Region with two Public and two Private subnets using the following CIDR blocks:

VPC CIDR = 10.10.10.0/24

 

Public Subnet

 

Subnet01 : 10.10.10.0/26

Subnet02 : 10.10.10.64/26

 

Private Subnet

 

Subnet03: 10.10.10.128/26

Subnet04: 10.10.10.192/26

 

Using the above CIDRs you created a new VPC, and you launched EC2 instances in all subnets as per the need.

 

Now, you are facing an issue in private instances that you are unable to update operating systems from the internet. So, what architectural changes and configurations will you suggest to resolve the issue?

 

A NAT gateway should be provisioned in one of the public subnets, and the route table associated with the private subnets should be updated with a default route (0.0.0.0/0) pointing to the NAT gateway. This gives the private instances outbound internet access for operating system updates.
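Sketched with the AWS CLI (the subnet, allocation, and route table IDs are placeholders):

# Allocate an Elastic IP and create the NAT gateway in a public subnet
aws ec2 allocate-address --domain vpc
aws ec2 create-nat-gateway --subnet-id subnet-0public01 --allocation-id eipalloc-0abc1234
# Point the private subnets' default route at the NAT gateway
aws ec2 create-route --route-table-id rtb-0private01 --destination-cidr-block 0.0.0.0/0 \
  --nat-gateway-id nat-0abc1234def567890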

 

65. The data on the root volumes of instance store-backed and EBS-backed instances gets deleted by default when they are terminated. If you want to prevent that from happening, which instance would you use? Ensure that if the EC2 instance is restarted, the data and configuration in the instance are not lost.

EBS-backed instances or instances with EBS Volume. EBS-backed instances use EBS volume as their root volume. These volumes contain Operating Systems, Applications, and Data. We can create Snapshots from these volumes or AMI from Snapshots.

 

The main advantage of EBS-backed volumes is that the data can be configured to be stored for later retrieval even if the virtual machine or instances are shut down.

 

66. You have an application running on an EC2 instance. You need to reduce the load on your instance as soon as the CPU utilization reaches 80 percent. How will you accomplish the job?

It can be done by creating an autoscaling group to deploy more instances when the CPU utilization of the EC2 instance exceeds 80 percent and distributing traffic among instances by creating an application load balancer and registering EC2 instances as target instances.

 

67. In AWS, three different storage services are available, such as EFS, S3, and EBS. When should I use Amazon EFS vs. Amazon S3 vs. Amazon Elastic Block Store (EBS)?

Amazon Web Services (AWS) offers cloud storage services to support a wide range of storage workloads.

 

Amazon EFS is a file storage service for use with Amazon compute (EC2, containers, and serverless) and on-premises servers. Amazon EFS provides a file system interface, file system access semantics (such as strong consistency and file locking), and concurrently accessible storage for up to thousands of Amazon EC2 instances.

 

Amazon EBS is a block-level storage service for use with Amazon EC2. Amazon EBS can deliver performance for workloads that require the lowest latency for access to data from a single EC2 instance.

 

Amazon S3 is an object storage service. Amazon S3 makes data available through an Internet API that can be accessed anywhere.

 

68. A company's web application is using multiple Linux Amazon EC2 instances and stores data on Amazon EBS volumes. The company is looking for a solution to increase the resiliency of the application in case of a failure and to provide storage that complies with atomicity, consistency, isolation, and durability (ACID). What should a solution architect do to meet these requirements?

Create an Application Load Balancer with AWS Auto Scaling groups across multiple Availability Zones. Store data on Amazon EFS and mount a target on each instance.

 

69. An application running on AWS uses an Amazon Aurora Multi-AZ deployment for its database. When evaluating performance metrics, a solutions architect discovered that the database reads were causing high I/O and adding latency to the write requests against the database. What should the solution architect do to separate the read requests from the write requests?

Create a read replica and modify the application to use the appropriate endpoint.

 

70. A client reports that they wanted to see an audit log of any changes made to AWS resources in their account. What can the client do to achieve this?

Enable AWS CloudTrail logs to be delivered to an Amazon S3 bucket.
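A minimal sketch (the trail and bucket names are placeholders; the bucket needs a policy that allows CloudTrail to write to it):

aws cloudtrail create-trail --name account-audit --s3-bucket-name my-audit-log-bucket \
  --is-multi-region-trail
aws cloudtrail start-logging --name account-audit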

 

71. Usually, one EBS volume can be attached to only one EC2 instance. Our company wants to run a business-critical application on multiple instances in a single region and store the output of all instances in a single storage volume within the VPC. Instead of using EFS, the company wants to use a multi-attach volume. As an architect, what instance type and EBS volume type would you suggest?

The instances should be EC2 Nitro-based, with Provisioned IOPS (io1) EBS volumes that have Multi-Attach enabled.

 

72. A company is using a VPC peering connection option to connect its multiple VPCs in a single region to allow for cross-VPC communication. A recent increase in account creation and VPCs has made it difficult to maintain the VPC peering strategy, and the company expects to grow to hundreds of VPCs. There are also new requests to create site-to-site VPNs with some of the VPCs. A solutions architect has been tasked with creating a central networking setup for multiple accounts and VPNs. Which networking solution would you recommend to resolve it?

Configure a transit gateway with AWS Transit Gateway and connect all VPCs and VPNs.

 

73. An organization has multiple facilities in various continents such as North America, Europe, and the Asia Pacific. The organization is designing a new distributed application to manage and optimize its global supply chain and its manufacturing process. It needs to design the process in such a way that the booked order in one continent should be able to support data failover with a short Recovery Time Objective (RTO). The uptime of the application should not impact manufacturing, what kind of solution would you recommend as a solution architect?

Use the Amazon DynamoDB global tables feature for the database.

 

AWS Cloud Computing Interview Questions

74. What do you understand by VPC?

VPC is the abbreviated form of Virtual Private Cloud. It allows you to launch AWS resources into a virtual network that you define and to fully customize the network configuration. Through a VPC, you take full control of your virtual network environment; for example, you can define a private address range, internet gateways, and subnets.

 

75. What are key pairs?

When connecting to an Amazon EC2 instance, you need to prove your identity. Key pairs are used to execute this. A key pair is a set of security credentials that are used during identity proofing. It consists of a public key and a private key.

 

76. What are policies and what are the different types of policies?

Policies define the permissions required to execute an operation, irrespective of the method used to perform it. AWS supports six types of policies:

 

Identity-based policies

Resource-based policies

Permissions boundaries

Organizations SCPs

ACLs

Session policies

1- Identity-based policies- These are JSON permissions policy documents that control what actions an identity can perform, under what conditions, and on which resources. They are further classified into two categories:

Managed policies- Standalone identity-based policies that can be attached to multiple users, groups, and roles in your AWS environment.

Inline policies- Policies directly attached to a single user, group, or role. Where inline policies are used, a strict one-to-one relationship between policy and identity is maintained.

2- Resource-based policies- These policies are the ones attached to a resource such as an Amazon S3 bucket. They define which actions can be performed on the particular resource and under what circumstances.

 

3- IAM permissions boundaries- They refer to the maximum level of permissions that identity-based policies can grant to the specific entity.

 

4- Service Control Policies (SCPs)- SCPs are the maximum level of permissions for an organization or organizational unit.

 

5- Access Control lists- They define and control which principals in another AWS account can access the particular resource.

 

6- Session policies- They are advanced policies that are passed as a parameter when a temporary session is programmatically created for a role or federated user.

 

77. Which of the following is not an option in security groups?

List of users

Ports

IP addresses

List of protocols

List of users

 


 

78. Unable to ping instance: We launched a Windows 2019 IIS server in the Ohio region and deployed a dynamic website on it; the web server is also connected to a backend MS-SQL server to store and access application data. Users were able to access the website over the Internet. The next day, our client informed us that they could access the website but could not ping the server from the Internet. We checked the Security Group, and it allows ICMP traffic from 0.0.0.0/0. How would you troubleshoot the issue?

If the client can access the website, the connectivity is fine, and the Security Group configuration also seems correct.

We should check the internal firewall of the Windows 2019 IIS server; if it is blocking ICMP traffic, we should allow it there.

 

AWS Glue Interview Questions

79. Mention some of the significant features of AWS Glue.

Here are some of the significant features of AWS Glue listed below.

 

Serverless data integration

Data quality and monitoring

Automatic discovery of structured and semi-structured data kept in your data lake on Amazon S3, data warehouse on Amazon Redshift, and other storage locations

Automatic generation of Scala or Python code for your ETL tasks

Visual cleaning and normalization of data without writing any code

80. In AWS Glue, how do you enable and disable a trigger?

You can execute the below commands to start or stop the trigger using the AWS CLI.

 

aws glue start-trigger --name MyTrigger

aws glue stop-trigger --name MyTrigger

81. What is a connection in AWS Glue?

A connection in AWS Glue is a Data Catalog object that stores the information required to connect to a data source such as Redshift, RDS, S3, or DynamoDB.

 

82. How can you start an AWS Glue workflow run using AWS CLI?

Using the start-workflow-run command of AWS CLI and passing the workflow name, one can start the Glue workflow.
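For example, assuming a hypothetical workflow named my-etl-workflow:

aws glue start-workflow-run --name my-etl-workflow
# The command returns a RunId; progress can then be checked with:
aws glue get-workflow-run --name my-etl-workflow --run-id wr_0123456789abcdef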

 

83. What is AWS Glue?

AWS Glue is a data integration service offered by Amazon that makes it easy to discover, prepare, move, and transform your data for analytics and application development.

 

84. What data sources does AWS Glue support?

AWS Glue can integrate with more than 80 data sources on AWS, on-premises, and on other clouds. The service natively supports data stored in the following databases in your Amazon Virtual Private Cloud (Amazon VPC) running on Amazon Elastic Compute Cloud (Amazon EC2):

 

Amazon Aurora

Amazon RDS for MySQL

Amazon RDS for Oracle

Amazon RDS for PostgreSQL

Amazon RDS for SQL Server

Amazon Redshift

Amazon DynamoDB

Amazon S3

MySQL, Oracle, Microsoft SQL Server, and PostgreSQL

85. What programming language can we use to write ETL code for AWS Glue?

We can use either Scala or Python.

 

86. Can we write our own code?

Yes. We can write our own code using the AWS Glue ETL library.

 

87. How does AWS Glue keep my data secure?

AWS Glue provides server-side encryption for data at rest and SSL for data in motion.

 

88. How do we monitor the execution of our AWS Glue jobs?

AWS Glue provides the status of each job and pushes all notifications to CloudWatch.

 

AWS S3 Interview Questions

89. Explain what S3 is.

S3 stands for Simple Storage Service. It is an object-based storage service on AWS. It is a pay-as-you-go service with the help of which you can store and extract any amount of data at any time from anywhere on the web.

 

90. What is the Replication Rule feature supported by AWS S3?

Amazon S3 offers a lot of useful features. One of them is the Replication Rule feature, which allows users to automatically replicate data to another bucket, in the same or a different AWS Region.

 

91. What are the different ways to encrypt a file in S3?

To encrypt a file in Amazon S3, users need to choose an appropriate encryption option. AWS S3 offers multiple encryption options such as:

 

Server-Side Encryption with Amazon S3 Managed Keys (SSE-S3)

Server-Side Encryption with AWS Key Management Service (SSE-KMS)

Server-Side Encryption with Customer-Provided Keys (SSE-C)

Client-Side Encryption.
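For illustration, here is how a single object might be uploaded with SSE-KMS, and how default bucket encryption can be set, using the AWS CLI (the bucket and key alias are placeholders):

aws s3 cp report.csv s3://my-bucket/report.csv --sse aws:kms --sse-kms-key-id alias/my-key
# Or make SSE-KMS the bucket default for all new objects
aws s3api put-bucket-encryption --bucket my-bucket --server-side-encryption-configuration \
  '{"Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms", "KMSMasterKeyID": "alias/my-key"}}]}'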

92. What is static website hosting in S3?

Static website hosting in S3 is a feature that allows users to host static web content directly from an S3 bucket.
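A quick sketch with the AWS CLI (the bucket name is a placeholder; the objects must also be publicly readable for the site to load):

aws s3 website s3://my-site-bucket/ --index-document index.html --error-document error.html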

 

93. Is there any possible way to restore the deleted S3 objects?

Yes, you can restore the deleted S3 objects easily if you have a versioning-enabled bucket.
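In a versioned bucket, a "delete" only adds a delete marker, so restoring is a matter of removing that marker; a rough AWS CLI sketch (bucket, key, and version ID are placeholders):

# Find the delete marker's VersionId
aws s3api list-object-versions --bucket my-bucket --prefix photos/cat.jpg
# Remove the delete marker; the previous version becomes current again
aws s3api delete-object --bucket my-bucket --key photos/cat.jpg \
  --version-id 3HL4kqtJvjVBH40Nrjfkd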

 

94. What is the maximum number of S3 buckets you can create?

By default, you can create 100 S3 buckets per AWS account. This is a soft limit that can be raised by requesting a service quota increase.

 

95. How will you configure an Amazon S3 bucket to serve static assets for your public web application?

By configuring the bucket policy to provide public read access to all objects.
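A minimal sketch of such a policy applied via the AWS CLI (the bucket name is a placeholder; S3 Block Public Access must also be disabled for the policy to take effect):

aws s3api put-bucket-policy --bucket my-site-bucket --policy '{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "PublicReadGetObject",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::my-site-bucket/*"
  }]
}'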

 

96. What is an S3 general-purpose bucket?

A bucket is a container for objects stored in Amazon S3, and you can store any number of objects in a bucket. General purpose buckets are the original S3 bucket type, and a single general purpose bucket can contain objects stored across all storage classes except S3 Express One Zone. They are recommended for most use cases and access patterns.

 

97. How is Amazon S3 data organized?

Amazon S3 is a simple key-based object store. When you store data, you assign a unique object key that can later be used to retrieve the data. Keys can be any string, and they can be constructed to mimic hierarchical attributes. Alternatively, you can use S3 Object Tagging to organize your data across all of your S3 buckets and/or prefixes.

 

AWS EC2 Interview Questions

98. What do you understand about Amazon EC2?

Amazon Elastic Compute Cloud (EC2) is a cloud computing service provided by Amazon Web Services (AWS) that offers resizable virtual servers for hosting applications in the cloud.

 

99. What is the Security Group in Amazon EC2?

Amazon Security Groups control both inbound and outbound traffic, acting like a virtual firewall for your EC2 instances.

 

100. Explain Stop vs. Terminate an Amazon EC2 instance.

Both stopping and terminating the Amazon EC2 instance have their purpose and consequences.

 

Feature | Stop | Terminate

Action | Halts the EC2 instance | Deletes the instance

Restart | Possible | Not possible

Use Case | Temporary pause | Long-term shutdown

101. What is the use of regions and availability zones in Amazon EC2 configuration?

Choosing a region determines where your EC2 instances are deployed geographically. Selecting specific Availability Zones within that region lets you distribute instances across multiple isolated locations for improved fault tolerance and high availability.

 

102. What is the placement group in EC2?

A placement group in Amazon EC2 is a logical grouping of instances that influences how they are placed on the underlying hardware. A cluster placement group packs instances close together within a single Availability Zone, while spread and partition placement groups distribute instances and can span multiple Availability Zones.

 

103. You have an application running on your Amazon EC2 instance. You want to reduce the load on your instance as soon as the CPU utilization reaches 100 percent. How will you do that?

It can be done by creating an Auto Scaling group that deploys more instances when CPU utilization reaches the threshold, and by creating a load balancer, registering the Amazon EC2 instances with it, and letting it distribute the traffic among them.

 

104. Suppose, I created a subnet and launched an EC2 instance in the subnet with default settings. Which of the following options will be ready to use on the EC2 instance as soon as it is launched?

Elastic IP

Private IP

Public IP

Internet Gateway

Private IP. A private IP is automatically assigned to the instance as soon as it is launched. An Elastic IP has to be attached manually, and a public IP requires an Internet Gateway, which has to be created separately since it is a new VPC.

 

105. Can you name the additional network interface that can be created and attached to your Amazon EC2 instance launched in your VPC?

Elastic Network Interface (ENI)

 

106. You want to modify the security group rules while it is being used by multiple EC2 instances. Will you be able to do that? If yes, will the new rules be implemented on all previously running EC2 instances that were using that security group?

Yes, the security group that is being used by multiple EC2 instances can be modified. The changes will be implemented immediately and applied to all the previously running EC2 instances without restarting the instances.

 

107. Can you change the private IP address of an EC2 instance while it is running or in a stopped state?

No, it cannot be changed. When an EC2 instance is launched, a private IP address is assigned to that instance at boot time. This private IP address is attached to the instance for its entire lifetime and can never be changed.

 

AWS Interview Questions for Solution Architect

108. What is Amazon Virtual Private Cloud (VPC), and why is it used?

Amazon Virtual Private Cloud (VPC) is a service offered by AWS that enables users to create their isolated virtual network inside the AWS cloud.

 

109. What is an AWS Load Balancer?

An AWS load balancer is a resource that distributes incoming traffic across multiple targets, such as EC2 instances.

 

110. What is AWS SNS?

Amazon Simple Notification Service (SNS) is a fully managed push notification service that sends messages to mobile and other distributed systems.

 

111. What are the different types of load balancers in EC2?

There are 3 types of Load Balancers in EC2.

 

Application Load Balancer (ALB)

Network Load Balancer (NLB)

Classic Load Balancer (CLB)

112. What is AWS CloudFormation?

AWS CloudFormation is an IaC (Infrastructure as Code) service offered by AWS that lets you deploy your infrastructure repeatably from code templates written in JSON or YAML.

 

113. What would I have to do if I wanted to access Amazon Simple Storage buckets and use the information for access audits?

AWS CloudTrail can be used in this case, as it is designed for logging and tracking API calls, including calls made to Amazon S3, so its logs can serve as the basis for access audits.

 

114. What are the native AWS security logging capabilities?

The native AWS security logging capabilities include AWS CloudTrail, AWS Config, AWS detailed billing reports, Amazon S3 access logs, Elastic load balancing Access logs, Amazon CloudFront access logs, Amazon VPC Flow logs, etc.

 

115. You have an application running on an EC2 instance. You need to reduce the load on your instance as soon as the CPU utilization reaches 80 percent. How will you accomplish the job?

It can be done by creating an autoscaling group to deploy more instances when the CPU utilization of the EC2 instance exceeds 80 percent and distributing traffic among instances by creating an application load balancer and registering EC2 instances as target instances.

 

116. Is it possible to switch from an Instance-backed root volume to an EBS-backed root volume at any time?

No, it is not possible.

 

117. Your organization has around 50 IAM users. Now, it wants to introduce a new policy that will affect the access permissions of an IAM user. How can it implement this without having to apply the policy at the individual user level?

It is possible by using AWS IAM groups: add the users to groups according to their roles and then apply the policy to the groups.

 

AWS VPC Interview Questions

118. What is the difference between stateful and stateless filtering?

Stateful filtering tracks the state of connections: if an inbound request is allowed, the corresponding response traffic is automatically allowed back out, which is how security groups behave. Stateless filtering evaluates each packet independently against the rules in both directions, with no connection tracking, which is how network ACLs behave.

 

119. What are the internet gateways in VPC?

Internet gateways are components that allow resources within your VPC to communicate to and from the Internet.

 

120. What are the security groups in a VPC?

Security groups control the flow of traffic within a Virtual Private Cloud. They act as stateful virtual firewalls, applying distinct inbound and outbound rules at the instance (network interface) level, and can be managed from both the VPC and EC2 sections of the AWS console.

 

121. What is an Elastic IP address?

An Elastic IP address is a static public IPv4 address allocated to your AWS account; it remains associated with your account until you release it.

 

122. What is a NAT device?

NAT stands for Network Address Translation. As the name implies, a NAT device replaces IP addresses for devices running in our network: when traffic is sent from a device to the internet, the device's private IP address is replaced with the NAT device's public IP address.

 

123. How many total VPCs per account/region and subnets per VPC can you have?

By default, we can have 5 VPCs per account per Region and 200 subnets per VPC. Both are soft limits that can be increased on request.

 

124. You accidentally stopped an EC2 instance in a VPC with an associated Elastic IP. If you start the instance again, what will be the result?

Elastic IP will only be disassociated from the instance if it’s terminated. If it’s stopped and started, there won’t be any change to the instance, and no data will be lost.

 

125. You have created a VPC with private and public subnets. In what kind of subnet would you launch the database servers?

Database servers should be ideally launched on private subnets. Private subnets are ideal for the backend services and databases of all applications since they are not meant to be accessed by the users of the applications, and private subnets are not routable from the Internet.

 

126. How many subnets can we create per VPC?

By default, you can create 200 subnets per VPC; this quota can be increased on request.

 

127. Can I monitor the network traffic in my VPC?

Yes. You can use Amazon VPC traffic mirroring and Amazon VPC flow logs features to monitor the network traffic in your Amazon VPC.

 


More AWS Scenario-Based Interview Questions

 

 

Q : Your website experiences varying levels of traffic throughout the day. How can you ensure that your Amazon EC2 instances automatically scale up and down based on demand?

You can use Amazon EC2 Auto Scaling to automatically adjust the number of instances based on predefined scaling policies. Define scaling policies based on metrics like CPU utilization or network traffic. When traffic increases, EC2 Auto Scaling adds more instances, and when traffic decreases, it removes instances, ensuring optimal performance and cost efficiency.


Q : You have an application that requires extremely low-latency communication between instances. How can you achieve this on Amazon EC2?

To achieve low-latency communication between instances, you can use EC2 Placement Groups. Placement Groups influence how instances are placed on underlying hardware. There are three types: Cluster Placement Groups, which place instances in close proximity within the same Availability Zone (AZ) for low-latency and High-Performance Computing (HPC) workloads; Spread Placement Groups, for critical instances that require maximum separation to minimize the risk of simultaneous failure; and Partition Placement Groups, for large distributed workloads such as HDFS, HBase, and Kafka.

Q : Your application needs to store sensitive data, and you want to ensure that the data is encrypted at rest on EC2 instances. How can you enable this encryption?

To encrypt data at rest on EC2 instances, you can use Amazon Elastic Block Store (EBS) volumes with encryption enabled. When creating or modifying an EBS volume, you can specify the use of AWS Key Management Service (KMS) to manage the encryption keys. Data written to the EBS volume is automatically encrypted, and it remains encrypted at rest.
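A short sketch of both approaches with the AWS CLI (the AZ, size, and key alias are placeholders):

# Create a single encrypted volume with a customer managed KMS key
aws ec2 create-volume --availability-zone us-east-1a --size 100 --volume-type gp3 \
  --encrypted --kms-key-id alias/my-data-key
# Or enforce encryption for all new EBS volumes in the region
aws ec2 enable-ebs-encryption-by-default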

Q : Your team is developing a containerized application and wants to deploy it on EC2 instances. Which service can you use to manage the containers on EC2 efficiently?

You can use Amazon Elastic Container Service (ECS) or Amazon Elastic Kubernetes Service (EKS) to manage containers on EC2 instances. ECS is a fully-managed service for running containers at scale, while EKS provides Kubernetes management capabilities for container orchestration. Both services simplify the process of deploying, managing, and scaling containerized applications on EC2 instances.

Q : Your application requires GPU capabilities for machine learning or graphics-intensive workloads. How can you launch EC2 instances with GPU support?

You can launch EC2 instances with GPU support by selecting an instance type that offers GPU resources. Examples of such instances include the “p3” and “g4” instance families. These instances are optimized for different GPU workloads, and you can choose the one that best fits your specific use case.

Q : You need to ensure that your EC2 instances are running in a private network and are not directly accessible from the internet. How can you achieve this?

To run EC2 instances in a private network without direct internet access, you can place them in an Amazon Virtual Private Cloud (VPC) with private subnets. To access the instances securely, you can set up a bastion host (jump host) in a public subnet, which acts as a gateway for connecting to the private instances through SSH or RDP.

Q : You want to enhance the security of your EC2 instances by restricting incoming traffic only to specific IP addresses. How can you implement this security measure?

To restrict incoming traffic to specific IP addresses on EC2 instances, you can configure security group rules. Security groups act as virtual firewalls and allow you to control inbound and outbound traffic. By specifying the desired IP ranges in the inbound rules, you can ensure that only traffic from those IP addresses is allowed to reach the instances.
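As a sketch, a rule allowing HTTPS only from a specific office range could be added with boto3; the security group ID and CIDR block are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Allow inbound HTTPS (443) only from the given CIDR range.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder security group ID
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{
            "CidrIp": "203.0.113.0/24",  # placeholder office network
            "Description": "Office network only",
        }],
    }],
)
```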

Q : Your organization needs to store and share data files across multiple EC2 instances. What service can you use to achieve scalable and durable file storage?

You can use Amazon Elastic File System (EFS) to achieve scalable and durable file storage for multiple EC2 instances. EFS provides a managed file system that can be mounted concurrently by multiple instances within a VPC. It supports the Network File System (NFS) protocol and automatically scales capacity as data grows.

Q : Your team wants to minimize the cost of running EC2 instances for non-production environments (e.g., development and testing). How can you achieve cost savings without compromising availability?

To minimize costs for non-production environments, you can use EC2 Spot Instances. Spot Instances let you use spare EC2 capacity at a steep discount compared to On-Demand pricing (you can optionally set a maximum price you are willing to pay). Keep in mind that Spot Instances can be interrupted when AWS reclaims the capacity, so they are best suited for stateless and fault-tolerant workloads such as development and testing.

Q : Your application requires the ability to quickly recover from instance failure and ensure data durability. What type of Amazon EBS volume is recommended for such scenarios?

To ensure data durability and quick recovery from instance failure, you can use Amazon EBS Provisioned IOPS SSD volumes (“io1”/“io2”). Provisioned IOPS volumes offer the highest performance and durability of the EBS volume types and are ideal for critical workloads that demand consistent, low-latency performance. Because EBS volumes persist independently of the instance, they can be re-attached to a replacement instance after a failure.

Q : Your organization needs to control the launch permissions of Amazon Machine Images (AMIs) and prevent accidental termination of EC2 instances. What AWS service can help you manage these permissions effectively?

You can use AWS Identity and Access Management (IAM) to manage launch permissions for AMIs and control who can launch instances from specific AMIs. IAM allows you to define policies that restrict or grant permissions for different users or groups. Additionally, IAM roles can be used to control what actions EC2 instances are allowed to perform, and enabling EC2 termination protection (the DisableApiTermination attribute) guards against accidental terminations.
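For example, termination protection can be enabled on a running instance with one boto3 call (the instance ID is a placeholder):

```python
import boto3

ec2 = boto3.client("ec2")

# With DisableApiTermination set, TerminateInstances calls fail until
# the attribute is turned off again.
ec2.modify_instance_attribute(
    InstanceId="i-0123456789abcdef0",  # placeholder instance ID
    DisableApiTermination={"Value": True},
)
```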

Q : Your organization needs to host a web application that requires consistent CPU performance and low latency. Which EC2 instance type would you recommend, and why?

For applications requiring consistent CPU performance and low latency, I would recommend using an EC2 instance from the “c5” or “m5” instance families. Both families are designed for compute-intensive workloads, with the “c5” instances offering higher CPU performance and the “m5” instances providing a balance of compute and memory resources.

Q : Your application involves batch processing of large datasets. How can you optimize the EC2 instances for such a workload?

For batch processing of large datasets, you can use EC2 instances from the “r5” instance family, which is optimized for memory-intensive workloads. By choosing an instance with sufficient memory, you can avoid performance bottlenecks caused by frequent disk swapping, enhancing the efficiency of your batch processing.

Q : You need to create a cost-effective, scalable, and fault-tolerant web application architecture. How can you achieve this with EC2?

To create a cost-effective, scalable, and fault-tolerant web application architecture, you can use EC2 instances with Elastic Load Balancing (ELB) and Auto Scaling. ELB distributes incoming traffic among multiple EC2 instances, while Auto Scaling automatically adjusts the number of instances based on demand, ensuring optimal performance and cost-efficiency.

Q : Your team is developing a microservices-based application and wants to deploy it on EC2 instances. What are some best practices to ensure the scalability and maintainability of the microservices architecture?

To ensure the scalability and maintainability of a microservices-based application on EC2, consider the following best practices:

  • Deploy each microservice on separate EC2 instances to achieve isolation.
  • Use containerization technology like Docker to package and deploy microservices consistently.
  • Implement an orchestration service like Amazon ECS or Amazon EKS to manage the containerized microservices efficiently.
  • Design microservices with loosely coupled communication to enable independent scaling and deployment.

Q : Your organization needs to run a Windows-based application on EC2 instances. How can you ensure that the instances are automatically updated with the latest Windows patches?

To automatically update Windows-based EC2 instances with the latest patches, you can use AWS Systems Manager Patch Manager. Patch Manager simplifies the process of managing Windows updates by automating patching and providing insights into compliance and patching status.
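As a minimal sketch, a patch run can be triggered on a managed instance through the AWS-RunPatchBaseline SSM document; the instance ID is a placeholder:

```python
import boto3

ssm = boto3.client("ssm")

# Scan for and install missing patches on a managed instance.
ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],   # placeholder instance ID
    DocumentName="AWS-RunPatchBaseline",   # AWS-managed patching document
    Parameters={"Operation": ["Install"]},
)
```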

Q : Your application requires low-latency access to a relational database. How can you optimize EC2 instances to minimize database response times?

To minimize database response times and achieve low-latency access, you can deploy EC2 instances in the same AWS Region and Availability Zone as the database. Additionally, consider using Amazon RDS Read Replicas to offload read traffic from the primary database, reducing the load and improving overall database performance.

Q : Your application must handle spikes in traffic during seasonal promotions. How can you ensure that the EC2 instances scale up automatically during peak times and scale down during off-peak times?

To automatically scale EC2 instances during peak and off-peak times, you can use Amazon EC2 Auto Scaling with scheduled scaling policies. Set up a schedule to increase the desired capacity before the expected peak traffic and decrease it afterward. EC2 Auto Scaling will adjust the number of instances based on the schedule, ensuring you have the right capacity when you need it.
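A minimal sketch of two scheduled actions with boto3; the group name, times, and capacities are hypothetical:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Scale out every day at 08:00 UTC ahead of peak traffic...
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",        # placeholder group name
    ScheduledActionName="scale-out-morning",
    Recurrence="0 8 * * *",                # cron expression, UTC
    DesiredCapacity=10,
)

# ...and scale back in at 20:00 UTC once traffic subsides.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",
    ScheduledActionName="scale-in-evening",
    Recurrence="0 20 * * *",
    DesiredCapacity=2,
)
```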

Q : Your organization is migrating a legacy application to AWS EC2. The application requires direct access to the underlying hardware. What EC2 feature can you use to fulfill this requirement?

To gain direct access to the underlying hardware, you can use Amazon EC2 Dedicated Hosts. EC2 Dedicated Hosts provide dedicated physical servers to run your instances, allowing you to launch instances on specific hardware for compliance, licensing, or regulatory requirements.

Q : Your team is running multiple applications on EC2 instances, and you want to optimize costs by leveraging unused compute capacity. What EC2 pricing option should you choose?

To optimize costs and leverage unused compute capacity, you can use Amazon EC2 Spot Instances. Spot Instances let you use spare EC2 capacity at a significant discount compared to On-Demand Instances (you can optionally set a maximum price). However, be aware that Spot Instances can be interrupted when AWS reclaims the capacity.

Q : You need to migrate an on-premises virtual machine (VM) to AWS EC2. What service can you use to simplify the VM migration process?

To simplify the migration of on-premises VMs to AWS EC2, you can use AWS Server Migration Service (SMS), which automates, schedules, and tracks incremental replications of VMs from your data center to AWS, reducing the complexity of the migration process. Note that SMS has since been superseded by AWS Application Migration Service (MGN), which is now AWS's recommended service for lift-and-shift migrations.

Q : Your application requires frequent changes and updates, and you want to test new features without affecting the production environment. How can you achieve this with EC2?

To test new features and changes without affecting the production environment, you can create an Amazon Machine Image (AMI) of the existing production EC2 instance. Launch a new EC2 instance using the AMI in a separate testing environment. This isolated environment allows you to experiment and validate changes before applying them to the production instance.
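For example, an AMI of a production instance can be created with boto3 (the instance ID and image name are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# NoReboot=True avoids stopping the production instance, at the cost of
# weaker filesystem-consistency guarantees for the image.
image = ec2.create_image(
    InstanceId="i-0123456789abcdef0",      # placeholder production instance
    Name="prod-app-baseline-2024-01-01",   # placeholder image name
    NoReboot=True,
)
print(image["ImageId"])  # launch test instances from this AMI
```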

Q : You want to implement data encryption in transit for communication between your EC2 instances and Amazon S3. How can you achieve this security measure?

Encryption in transit between EC2 instances and Amazon S3 is achieved by using S3's HTTPS (TLS) endpoints, which the AWS SDKs and CLI use by default. You can enforce this with a bucket policy that denies any request made without TLS (the aws:SecureTransport condition). Amazon S3 Transfer Acceleration can optionally be enabled on top of this to speed up transfers over an optimized network path, still encrypted with SSL/TLS.
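A minimal sketch of such a TLS-enforcing bucket policy applied with boto3; the bucket name is a placeholder:

```python
import json

import boto3

s3 = boto3.client("s3")

# Deny any S3 request to the bucket that is not made over TLS.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyInsecureTransport",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::my-bucket",      # placeholder bucket ARN
            "arn:aws:s3:::my-bucket/*",
        ],
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}
s3.put_bucket_policy(Bucket="my-bucket", Policy=json.dumps(policy))
```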

Q : Your application relies on stateful connections between clients and servers, and you need to preserve these connections even if an EC2 instance fails. What service can you use to achieve this?

To preserve a stable endpoint for stateful connections even if an EC2 instance fails, you can use Elastic IP addresses (EIPs) in combination with Auto Scaling. An EIP gives you a static public IP address that stays allocated to your account even when the instance behind it is terminated. When Auto Scaling replaces a failed instance, a lifecycle hook or startup script can re-associate the EIP with the new instance, so clients keep connecting to the same address.

Q : Your development team needs to share sensitive data securely between EC2 instances. How can you set up a secure communication channel for this purpose?

To set up a secure communication channel between EC2 instances, you can use Virtual Private Cloud (VPC) peering or AWS PrivateLink. VPC peering lets you connect VPCs privately, whether they belong to the same or different AWS accounts. AWS PrivateLink enables secure, private communication between VPCs and supported AWS services without traversing the internet.

Q : Your organization requires on-premises resources to communicate securely with EC2 instances within a VPC. How can you establish a secure connection between your on-premises network and the VPC?

To establish a secure connection between your on-premises network and an EC2 instance within a VPC, you can use AWS Site-to-Site VPN or AWS Direct Connect. Site-to-Site VPN creates an encrypted tunnel over the internet, whereas Direct Connect provides a dedicated connection through a private network link.

Q : Your team wants to ensure that only authorized personnel can access the EC2 instances via SSH. What security measure should be implemented?

To ensure that only authorized personnel can access the EC2 instances via SSH, you should configure the security group rules to allow inbound SSH access only from specific IP addresses or ranges associated with authorized personnel. Additionally, you can manage SSH access using IAM roles and AWS Systems Manager Session Manager for secure remote management.

Q : Your organization wants to ensure that EC2 instances are protected against common security threats and vulnerabilities. What service can you use to monitor and assess the security posture of your instances?

You can use Amazon Inspector to monitor and assess the security posture of your EC2 instances. Amazon Inspector automatically assesses instances for vulnerabilities and security deviations based on predefined rulesets, providing you with detailed findings and recommendations to enhance the security of your environment.

Q : Your application requires high network performance and low latency communication between EC2 instances in different Availability Zones. What service can you use to achieve this requirement?

To achieve high network performance and low latency communication between EC2 instances in different Availability Zones, you can use Enhanced Networking with Elastic Network Adapter (ENA). ENA optimizes network performance for EC2 instances, allowing for faster and more reliable inter-instance communication.

Q : Your team wants to automate the process of managing EC2 instances and their configurations. Which AWS service can you use for this purpose?

You can use AWS Systems Manager to automate the process of managing EC2 instances and their configurations. Systems Manager provides a unified interface for managing EC2 instances, including tasks like patch management, configuration management, and instance inventory.

Q : You need to run Windows-based applications on EC2 instances, and your team requires remote desktop access for management purposes. How can you enable remote desktop access to Windows EC2 instances?

To enable remote desktop access to Windows EC2 instances, you need to configure the Windows Firewall and EC2 Security Groups to allow Remote Desktop Protocol (RDP) traffic (port 3389). Additionally, ensure that you have the necessary credentials to log in to the instances remotely.

Q : Your team wants to monitor the performance of EC2 instances and set up alerts for abnormal behavior. What AWS service can help you achieve this?

To monitor the performance of EC2 instances and set up alerts, you can use Amazon CloudWatch. CloudWatch provides a comprehensive set of monitoring and alerting capabilities, allowing you to collect and track metrics, set alarms, and automatically react to changes in your EC2 instances’ performance.
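As a sketch, a CPU alarm that notifies an SNS topic can be created with boto3; the instance ID, topic ARN, and threshold are hypothetical:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when average CPU stays above 80% for two 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-web-1",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
```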

Q : You want to deploy your web application to multiple regions to ensure high availability and low latency. What AWS service can you use to automate the deployment process across regions?

You can use AWS Elastic Beanstalk to automate the deployment process of your web application across multiple regions. Elastic Beanstalk simplifies application deployment by automatically handling capacity provisioning, load balancing, scaling, and application health monitoring.

Q : Your organization needs to ensure data privacy and compliance by restricting access to EC2 instances based on user roles. How can you achieve this?

To restrict access to EC2 instances based on user roles, you can use AWS Identity and Access Management (IAM) to manage user permissions. Define IAM roles with specific permissions and assign them to users or groups. Users can access the EC2 instances based on the permissions associated with their roles.

Q : Your application requires a mix of Linux and Windows instances to handle different tasks. Can you use the same security groups for both Linux and Windows instances?

Yes, you can use the same security groups for both Linux and Windows instances. A security group is a stateful virtual firewall that controls inbound and outbound traffic based on the rules you define, regardless of the instance's operating system.

Q : Your team wants to ensure that your EC2 instances are accessible over the internet while still being protected from unauthorized access. What security measure can you implement?

To ensure that your EC2 instances are accessible over the internet while being protected, you can use a combination of security groups and Network Access Control Lists (NACLs). Security groups control inbound and outbound traffic for EC2 instances (statefully), while NACLs control traffic to and from subnets (statelessly), providing an additional layer of security.

Q : Your application requires persistent data storage that survives instance termination. What storage option can you use on EC2 for this purpose?

For persistent data storage that survives instance termination, you can use Amazon Elastic Block Store (EBS) volumes. EBS volumes are durable, block-level storage devices that can be attached to EC2 instances and persist independently of the instance lifecycle (for root volumes, disable the DeleteOnTermination flag if the data must survive termination).

Q : Your organization wants to ensure that EC2 instances are launched only within specific AWS Regions. How can you enforce this policy?

To enforce the launching of EC2 instances within specific AWS Regions, you can use AWS Service Control Policies (SCPs) with AWS Organizations. SCPs allow you to set permissions that apply to the entire organization or specific organizational units, ensuring that instances are launched only in approved regions.

Q : Your application processes a large number of data records, and you want to distribute the workload efficiently across multiple EC2 instances. What AWS service can you use for this purpose?

To distribute the workload efficiently across multiple EC2 instances, you can use Amazon Elastic MapReduce (EMR). EMR is a managed service that simplifies the processing of large datasets using popular data processing frameworks like Apache Hadoop and Apache Spark.

Q : Your team is designing a solution for disaster recovery and business continuity. How can you replicate EC2 instances and data across AWS Regions?

To replicate EC2 instances and data across AWS Regions for disaster recovery, you can combine services such as AWS Backup (with cross-Region copy of EBS snapshots and AMIs), AWS Database Migration Service (DMS) for ongoing database replication, and AWS Lambda functions to automate the replication workflow. AWS Elastic Disaster Recovery (DRS) is also purpose-built for continuously replicating servers into another Region.

Q : Your application requires instances with large amounts of storage for database backups and archiving. What EC2 instance family is best suited for this use case?

For applications that require instances with large amounts of storage, you can use EC2 instances from the “i3” or “d2” instance families. These instance families are optimized for storage-intensive workloads, with “i3” instances offering high-performance local NVMe SSD storage, and “d2” instances providing cost-effective HDD storage.

Q : Your application needs to support both IPv4 and IPv6 traffic. How can you ensure that EC2 instances can handle both types of traffic?

To ensure that EC2 instances can handle both IPv4 and IPv6 traffic, associate an IPv6 CIDR block with your VPC and subnets to enable dual-stack networking, assign IPv6 addresses to the instances, and update route tables and security group rules for IPv6. With dual-stack enabled, EC2 instances can communicate over both IPv4 and IPv6.

Q : Your organization needs to run a highly regulated workload that requires strict access control and monitoring. What AWS service can you use to enforce fine-grained access permissions and logging?

To enforce fine-grained access permissions and logging for a highly regulated workload, you can use AWS Identity and Access Management (IAM) with AWS CloudTrail. IAM allows you to manage user access to AWS resources, while CloudTrail provides detailed logs of API calls made by users and services.

Q : Your organization wants to reduce costs for development and testing environments, which are only required during specific hours of the day. How can you achieve cost savings?

To reduce costs for development and testing environments, you can use EC2 Instance Scheduler. EC2 Instance Scheduler allows you to automatically start and stop EC2 instances based on a defined schedule, ensuring that instances are only running when needed.
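If the full Instance Scheduler solution is more than you need, a scheduled Lambda function can achieve a similar effect. A minimal sketch, assuming a hypothetical Schedule=office-hours tag on the instances and an EventBridge rule that invokes the function after hours:

```python
import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    # Find running instances carrying the (hypothetical) schedule tag.
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:Schedule", "Values": ["office-hours"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]
    instance_ids = [
        inst["InstanceId"]
        for res in reservations
        for inst in res["Instances"]
    ]
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)
    return {"stopped": instance_ids}
```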


 

 

 

Basic AWS Interview Questions

Starting with the fundamentals, this section introduces basic AWS interview questions essential for building a foundational understanding. It's tailored for those new to AWS or needing a refresher, setting the stage for more detailed exploration later.

1. What is cloud computing?

Cloud computing provides on-demand access to IT resources like compute, storage, and databases over the internet. Users pay only for what they use instead of owning physical infrastructure.

Cloud enables accessing technology services flexibly as needed without big upfront investments. Leading providers like AWS offer a wide range of cloud services via the pay-as-you-go consumption model. Our AWS Cloud Concepts course covers many of these basics.

2. What is the problem with the traditional IT approach compared to using the Cloud?

Multiple industries are moving away from traditional IT and adopting cloud infrastructure because the cloud approach provides greater business agility, faster innovation, flexible scaling, and a lower total cost of ownership than traditional IT. Below are some of the characteristics that differentiate them:

Traditional IT:

  • Requires large upfront capital expenditures
  • Limited ability to scale based on demand
  • Lengthy procurement and provisioning cycles
  • Higher maintenance overhead
  • Limited agility and innovation

Cloud computing:

  • No upfront infrastructure investment
  • Pay-as-you-go based on usage
  • Rapid scaling to meet demand
  • Reduced maintenance overhead
  • Faster innovation and new IT initiatives
  • Increased agility and responsiveness

3. How many types of deployment models exist in the cloud?

There are three different types of deployment models in the cloud, and they are illustrated below:

  • Private cloud: this type of service is used by a single organization and is not exposed to the public. It is suited to organizations running sensitive applications.
  • Public cloud: these cloud resources are owned and operated by third-party cloud providers such as Amazon Web Services, Microsoft Azure, and the others mentioned in the AWS market share section.
  • Hybrid cloud: this is the combination of both private and public clouds. It is designed to keep some servers on-premises while extending the remaining capabilities to the cloud. A hybrid cloud provides the flexibility and cost-effectiveness of the public cloud while retaining control over sensitive workloads.

4. What are the five characteristics of cloud computing?

Cloud computing is composed of five main characteristics, and they are illustrated below:

  • On-demand self-service: Users can provision cloud services as needed without human interaction with the service provider.
  • Broad network access: Services are available over the network and accessed through standard mechanisms like mobile phones, laptops, and tablets.
  • Multi-tenancy and resource pooling: Resources are pooled to serve multiple customers, with different virtual and physical resources dynamically assigned based on demand.
  • Rapid elasticity and scalability: Capabilities can be elastically provisioned and scaled up or down quickly and automatically to match capacity with demand.
  • Measured service: Resource usage is monitored, controlled, reported, and billed transparently based on utilization. Usage can be managed, controlled, and reported, providing transparency for the provider and consumer.

5. What are the main types of Cloud Computing?

There are three main types of cloud computing: IaaS, PaaS, and SaaS.

  • Infrastructure as a Service (IaaS): Provides basic building blocks for cloud IT like compute, storage, and networking that users can access on-demand without needing to manage the underlying infrastructure. Examples: AWS EC2, S3, VPC.
  • Platform as a Service (PaaS): Provides a managed platform or environment for developing, deploying, and managing cloud-based apps without needing to build the underlying infrastructure. Examples: AWS Elastic Beanstalk, Heroku
  • Software as a Service (SaaS): Provides access to complete end-user applications running in the cloud that users can use over the internet. Users don't manage infrastructure or platforms. Examples: AWS Simple Email Service, Google Docs, Salesforce CRM.

You can explore these in more detail in our Understanding Cloud Computing course.

6. What is Amazon EC2, and what are its main uses?

Amazon EC2 (Elastic Compute Cloud) provides scalable virtual servers called instances in the AWS Cloud. It is used to run a variety of workloads flexibly and cost-effectively. Some of its main uses are illustrated below:

  • Host websites and web applications
  • Run backend processes and batch jobs
  • Implement hybrid cloud solutions
  • Achieve high availability and scalability
  • Reduce time to market for new use cases

7. What is Amazon S3, and why is it important?

Amazon Simple Storage Service (S3) is a versatile, scalable, and secure object storage service. It serves as the foundation for many cloud-based applications and workloads. Below are a few features highlighting its importance:

  • Durable with 99.999999999% durability and 99.99% availability, making it suitable for critical data.
  • Supports robust security features like access policies, encryption, VPC endpoints.
  • Integrates seamlessly with other AWS services like Lambda, EC2, EBS, just to name a few.
  • Low latency and high throughput make it ideal for big data analytics, mobile applications, media storage and delivery.
  • Flexible management features for monitoring, access logs, replication, versioning, lifecycle policies.
  • Backed by the AWS global infrastructure for low latency access worldwide.

8. Explain the concept of ‘Regions’ and ‘Availability Zones’ in AWS

  • AWS Regions correspond to separate geographic locations where AWS resources are located. Businesses choose regions close to their customers to reduce latency, and cross-region replication provides better disaster recovery.
  • Availability zones consist of one or more discrete data centers with redundant power, networking, and connectivity. They allow the deployment of resources in a more fault-tolerant way.

Our course AWS Cloud Concepts provides readers with a complete guide to learning about AWS’s main core services, best practices for designing AWS applications, and the benefits of using AWS for businesses.


AWS Interview Questions for Intermediate and Experienced

AWS DevOps interview questions

Moving to specialized roles, the emphasis here is on how AWS supports DevOps practices. This part examines the automation and optimization of AWS environments, challenging individuals to showcase their skills in leveraging AWS for continuous integration and delivery. If you're going for an advanced AWS role, check out our Data Architect Interview Questions blog post to practice some data infrastructure and architecture questions.

9. How do you use AWS CodePipeline to automate a CI/CD pipeline for a multi-tier application?

CodePipeline can be used to automate the flow from code check-in to build, test, and deployment across multiple environments to streamline the delivery of updates while maintaining high standards of quality.

The following steps can be followed to automate a CI/CD pipeline:

  • Create a Pipeline: Start by creating a pipeline in AWS CodePipeline, specifying your source code repository (e.g., GitHub, AWS CodeCommit).
  • Define Build Stage: Connect to a build service like AWS CodeBuild to compile your code, run tests, and create deployable artifacts.
  • Setup Deployment Stages: Configure deployment stages for each tier of your application. Use AWS CodeDeploy to automate deployments to Amazon EC2 instances, AWS Elastic Beanstalk for web applications, or AWS ECS for containerized applications.
  • Add Approval Steps (Optional): For critical environments, insert manual approval steps before deployment stages to ensure quality and control.
  • Monitor and Iterate: Monitor the pipeline's performance and adjust as necessary. Utilize feedback and iteration to continuously improve the deployment process.
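The first two stages of such a pipeline can be sketched with boto3; the pipeline name, service role ARN, artifact bucket, repository, and CodeBuild project below are hypothetical placeholders:

```python
import boto3

codepipeline = boto3.client("codepipeline")

# Minimal source + build pipeline; deploy stages would follow the same
# pattern with CodeDeploy, Elastic Beanstalk, or ECS providers.
codepipeline.create_pipeline(pipeline={
    "name": "web-app-pipeline",
    "roleArn": "arn:aws:iam::123456789012:role/CodePipelineServiceRole",
    "artifactStore": {"type": "S3", "location": "my-pipeline-artifacts"},
    "stages": [
        {
            "name": "Source",
            "actions": [{
                "name": "Checkout",
                "actionTypeId": {"category": "Source", "owner": "AWS",
                                 "provider": "CodeCommit", "version": "1"},
                "configuration": {"RepositoryName": "web-app",
                                  "BranchName": "main"},
                "outputArtifacts": [{"name": "SourceOutput"}],
            }],
        },
        {
            "name": "Build",
            "actions": [{
                "name": "Build",
                "actionTypeId": {"category": "Build", "owner": "AWS",
                                 "provider": "CodeBuild", "version": "1"},
                "configuration": {"ProjectName": "web-app-build"},
                "inputArtifacts": [{"name": "SourceOutput"}],
                "outputArtifacts": [{"name": "BuildOutput"}],
            }],
        },
    ],
})
```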

10. What key factors should be considered in designing a deployment solution on AWS to effectively provision, configure, deploy, scale, and monitor applications?

Creating a well-architected AWS deployment involves tailoring AWS services to your app's needs, covering compute, storage, and database requirements. This process, complicated by AWS's vast service catalog, includes several crucial steps:

  • Provisioning: Set up essential AWS infrastructure such as EC2, VPC, subnets or managed services like S3, RDS, CloudFront for underlying applications.
  • Configuring: Adjust your setup to meet specific requirements related to the environment, security, availability, and performance.
  • Deploying: Efficiently roll out or update app components, ensuring smooth version transitions.
  • Scaling: Dynamically modify resource allocation based on predefined criteria to handle load changes.
  • Monitoring: Keep track of resource usage, deployment outcomes, app health, and logs to ensure everything runs as expected.

11. What is Infrastructure as a Code? Describe in your own words

Infrastructure as Code (IaC) is a method of managing and provisioning computer data centers through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools.

Essentially, it allows developers and IT operations teams to automatically manage, monitor, and provision resources through code, rather than manually setting up and configuring hardware.

Also, IaC enables consistent environments to be deployed rapidly and scalably by codifying infrastructure, thereby reducing human error and increasing efficiency.
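To make this concrete, here is a minimal AWS CDK (Python) sketch of IaC: a versioned S3 bucket declared in code rather than clicked together in the console. The stack and bucket names are illustrative, and the aws-cdk-lib package is assumed to be installed:

```python
from aws_cdk import App, Stack
from aws_cdk import aws_s3 as s3
from constructs import Construct

class StorageStack(Stack):
    """Declares infrastructure in code; `cdk deploy` provisions it."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # The bucket definition lives in version control, so every
        # environment gets an identical, reviewable configuration.
        s3.Bucket(self, "DataBucket", versioned=True)

app = App()
StorageStack(app, "storage-stack")
app.synth()
```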

12. What is your approach to handling continuous integration and deployment in AWS DevOps?

In AWS DevOps, continuous integration and deployment can be managed by utilizing AWS Developer Tools. Begin by storing and versioning your application's source code with these tools.

Then, leverage services like AWS CodePipeline for orchestrating the build, test, and deployment processes. CodePipeline serves as the backbone, integrating with AWS CodeBuild for compiling and testing code, and AWS CodeDeploy for automating the deployment to various environments. This streamlined approach ensures efficient, automated workflows for continuous integration and delivery.

13. How does Amazon ECS benefit AWS DevOps?

Amazon ECS is a scalable container management service that simplifies running Docker containers on EC2 instances through a managed cluster, enhancing application deployment and operation.

14. Why might ECS be preferred over Kubernetes?

ECS offers simpler setup and operation and tighter native integration with AWS services than Kubernetes, making it a preferred choice for teams that want container orchestration without Kubernetes' operational complexity.

AWS solution architect interview questions

For solution architects, the focus is on designing AWS solutions that meet specific requirements. This segment tests the ability to create scalable, efficient, and cost-effective systems using AWS, highlighting architectural best practices.

15. What is the role of an AWS solution architect?

AWS Solutions Architects design and oversee applications on AWS, ensuring scalability and optimal performance. They guide developers, system administrators, and customers on utilizing AWS effectively for their business needs and communicate complex concepts to both technical and non-technical stakeholders.

16. What are the key security best practices for AWS EC2?

Essential EC2 security practices include using IAM for access management, restricting access to trusted hosts, minimizing permissions, disabling password-based logins for AMIs, and implementing multi-factor authentication for enhanced security.

17. What is AWS VPC and its purpose?

Amazon VPC enables the deployment of AWS resources within a virtual network that is architecturally similar to a traditional data center network, offering the advantage of AWS's scalable infrastructure.

18. What are the strategies to create a highly available and fault-tolerant AWS architecture for critical web applications?

Building a highly available and fault-tolerant architecture on AWS involves several strategies to reduce the impact of failure and ensure continuous operation. Key principles include:

  • Implementing redundancy across system components to eliminate single points of failure
  • Using load balancing to distribute traffic evenly and ensure optimal performance
  • Setting up automated monitoring for real-time failure detection and response
  • Designing systems for scalability to handle varying loads, with a distributed architecture to enhance fault tolerance
  • Employing fault isolation, regular backups, and disaster recovery plans for data protection and quick recovery
  • Designing for graceful degradation to maintain functionality during outages, while continuous testing and deployment practices improve system reliability

19. Explain how you would choose between Amazon RDS, Amazon DynamoDB, and Amazon Redshift for a data-driven application.

Choosing between Amazon RDS, DynamoDB, and Redshift for a data-driven application depends on your specific needs:

  • Amazon RDS is ideal for applications that require a traditional relational database with standard SQL support, transactions, and complex queries.
  • Amazon DynamoDB suits applications needing a highly scalable, NoSQL database with fast, predictable performance at any scale. It's great for flexible data models and rapid development.
  • Amazon Redshift is best for analytical applications requiring complex queries over large datasets, offering fast query performance by using columnar storage and data warehousing technology.

20. What considerations would you take into account when migrating an existing on-premises application to AWS? Use an example of choice.

When moving a company's customer relationship management (CRM) software from an in-house server setup to Amazon Web Services (AWS), it's essential to follow a strategic framework similar to the one AWS suggests, tailored for this specific scenario:

  • Initial Preparation and Strategy Formation
    • Evaluate the existing CRM setup to identify limitations and areas for improvement.
    • Set clear migration goals, such as achieving better scalability, enhancing data analysis features, or cutting down on maintenance costs.
    • Identify AWS solutions required, like leveraging Amazon EC2 for computing resources and Amazon RDS for managing the database.
  • Assessment and Strategy Planning
    • Catalog CRM components to prioritize which parts to migrate first.
    • Select appropriate migration techniques, for example, moving the CRM database with AWS Database Migration Service (DMS).
    • Plan for a steady network connection during the move, potentially using AWS Direct Connect.
  • Execution and Validation
    • Map out a detailed migration strategy beginning with less critical CRM modules as a trial run.
    • Secure approval from key stakeholders before migrating the main CRM functions, employing AWS services.
    • Test the migrated CRM's performance and security on AWS, making adjustments as needed.
  • Transition to Cloud Operation
    • Switch to fully managing the CRM application in the AWS environment, phasing out old on-premises components.
    • Utilize AWS's suite of monitoring and management tools for continuous oversight and refinement.
    • Apply insights gained from this migration to inform future transitions, considering broader cloud adoption across other applications.

This approach ensures the CRM migration to AWS is aligned with strategic business objectives, maximizing the benefits of cloud computing in terms of scalability, efficiency, and cost savings.

21. Describe how you would use AWS services to implement a microservices architecture.

Implementing a microservice architecture involves breaking down a software application into small, independent services that communicate through APIs. Here’s a concise guide to setting up microservices:

  • Adopt Agile Development: Use agile methodologies to facilitate rapid development and deployment of individual microservices.
  • Embrace API-First Design: Develop APIs for microservices interaction first to ensure clear, consistent communication between services.
  • Leverage CI/CD Practices: Implement continuous integration and continuous delivery (CI/CD) to automate testing and deployment, enhancing development speed and reliability.
  • Incorporate Twelve-Factor App Principles: Apply these principles to create scalable, maintainable services that are easy to deploy on cloud platforms like AWS.
  • Choose the Right Architecture Pattern: Consider API-driven, event-driven, or data streaming patterns based on your application’s needs to optimize communication and data flow between services.
  • Leverage AWS for Deployment: Use AWS services such as container technologies for scalable microservices or serverless computing to reduce operational complexity and focus on building application logic.
  • Implement Serverless Principles: When appropriate, use serverless architectures to eliminate infrastructure management, scale automatically, and pay only for what you use, enhancing system efficiency and cost-effectiveness.
  • Ensure System Resilience: Design microservices for fault tolerance and resilience, using AWS's built-in availability features to maintain service continuity.
  • Focus on Cross-Service Aspects: Address distributed monitoring, logging, tracing, and data consistency to maintain system health and performance.
  • Review with AWS Well-Architected Framework: Use the AWS Well-Architected Tool to evaluate your architecture against AWS’s best practices, ensuring reliability, security, efficiency, and cost-effectiveness.

By carefully considering these points, teams can effectively implement a microservice architecture that is scalable, flexible, and suitable for their specific application needs, all while leveraging AWS’s extensive cloud capabilities.

22. What is the relationship between AWS Glue and AWS Lake Formation?

AWS Lake Formation builds on AWS Glue's infrastructure, incorporating its ETL capabilities, control console, data catalog, and serverless architecture. While AWS Glue focuses on ETL processes, Lake Formation adds features for building, securing, and managing data lakes, enhancing Glue's functions.

For AWS Glue interview questions, it's important to understand how Glue supports Lake Formation. Candidates should be ready to discuss Glue's role in data lake management within AWS, showing their grasp of both services' integration and functionalities in the AWS ecosystem. This demonstrates a deep understanding of how these services collaborate to process and manage data efficiently.

Advanced AWS Interview Questions and Answers

AWS data engineer interview questions

Addressing data engineers, this section dives into AWS services for data handling, including warehousing and real-time processing. It looks at the expertise required to build scalable data pipelines with AWS.

23. Describe the difference between Amazon Redshift, RDS, and S3, and when should each one be used?

  • Amazon S3 is an object storage service that provides scalable and durable storage for any amount of data. It can be used to store raw, unstructured data like log files, CSVs, images, etc.
  • Amazon Redshift is a cloud data warehouse optimized for analytics and business intelligence. It integrates with S3 and can load data stored there to perform complex queries and generate reports.
  • Amazon RDS provides managed relational databases like PostgreSQL, MySQL, etc. It can power transactional applications that need ACID-compliant databases with features like indexing, constraints, etc.

24. Describe a scenario where you would use Amazon Kinesis over AWS Lambda for data processing. What are the key considerations?

Kinesis can be used to handle large amounts of streaming data and allows reading and processing the streams with consumer applications.

Some of the key considerations are illustrated below:

  • Data volume: Each Kinesis shard can ingest up to 1 MB per second, and a stream scales by adding shards, whereas a synchronous Lambda invocation is limited to a 6 MB payload; Kinesis is therefore better suited to high-throughput streams.
  • Streaming processing: Kinesis consumers continuously process records in real time as they arrive, versus Lambda's batch invocations, which helps with low-latency processing.
  • Replay capability: Kinesis streams retain data for a configured period, allowing records to be replayed and reprocessed if needed, whereas Lambda is not suited for replay.
  • Ordering: Kinesis shards allow ordered processing of related records; Lambda, on the other hand, may process records out of order.
  • Scaling and parallelism: Kinesis shards can be scaled to handle the load, whereas Lambda may need additional orchestration.
  • Integration: Kinesis integrates well with other AWS services like Firehose, Redshift, and EMR for analytics.

Furthermore, for high-volume, continuous, ordered, and replayable stream processing cases like real-time analytics, Kinesis provides native streaming support compared to Lambda's batch approach.
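As a small sketch, producing a record to a Kinesis stream with boto3 looks like this; the stream name and payload are hypothetical, and the partition key determines which shard (and therefore which ordering guarantee) the record lands on:

```python
import json

import boto3

kinesis = boto3.client("kinesis")

# Records with the same partition key go to the same shard, preserving
# their relative order for downstream consumers.
kinesis.put_record(
    StreamName="sensor-events",  # placeholder stream name
    Data=json.dumps({"device_id": "sensor-42", "temp_c": 21.5}),
    PartitionKey="sensor-42",
)
```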

To learn more about data streaming, our course Streaming Data with AWS Kinesis and Lambda helps users learn how to leverage these technologies to ingest data from millions of sources and analyze them in real time. This can also help you prepare for AWS Lambda interview questions.

25. What are the key differences between batch and real-time data processing? When would you choose one approach over the other for a data engineering project?

Batch processing involves collecting data over a period of time and processing it in large chunks or batches. This works well for analyzing historical, less frequent data.

Real-time streaming processing analyzes data continuously as it arrives in small increments. It allows for analyzing fresh, frequently updated data.

For a data engineering project, real-time streaming could be chosen when:

  • You need immediate insights and can't wait for a batch process to run. For example, fraud detection.
  • The data is constantly changing and analysis needs to keep up, like social media monitoring.
  • Low latency is required, like for automated trading systems.

Batch processing may be better when:

  • Historical data needs complex modeling or analysis, like demand forecasting.
  • Data comes from various sources that only provide periodic dumps.
  • Lower processing costs are critical over processing speed.

So real-time is best for rapidly evolving data needing continuous analysis, while batch suits periodically available data requiring historical modeling.

26. What is an operational data store, and how does it complement a data warehouse?

An operational data store (ODS) is a database designed to support real-time business operations and analytics. It acts as an interim platform between transactional systems and the data warehouse.

While a data warehouse contains high-quality data optimized for business intelligence and reporting, an ODS contains up-to-date, subject-oriented, integrated data from multiple sources.

Below are the key features of an ODS:

  • It provides real-time data for operations monitoring and decision-making
  • Integrates live data from multiple sources
  • It is optimized for fast queries and analytics vs long-term storage
  • ODS contains granular, atomic data vs aggregated in warehouse

An ODS and data warehouse are complementary systems. ODS supports real-time operations using current data, while the data warehouse enables strategic reporting and analysis leveraging integrated historical data. When combined, they provide a comprehensive platform for both operational and analytical needs.

AWS Scenario-based Questions

Focusing on practical application, these questions assess problem-solving abilities in realistic scenarios, demanding a comprehensive understanding of how to employ AWS services to tackle complex challenges.

Case: Application migration

Scenario: A company plans to migrate its legacy application to AWS. The application is data-intensive and requires low-latency access for users across the globe. What AWS services and architecture would you recommend to ensure high availability and low latency?

Solution:

  • EC2 for compute
  • S3 for storage
  • CloudFront for content delivery
  • Route 53 for DNS routing

Case: Disaster recovery

Scenario: Your organization wants to implement a disaster recovery plan for its critical AWS workloads with an RPO (Recovery Point Objective) of 5 minutes and an RTO (Recovery Time Objective) of 1 hour. Describe the AWS services you would use to meet these objectives.

Solution:

  • AWS Backup for regular backups of critical data and systems with a 5-minute recovery point objective (RPO)
  • CloudFormation to define and provision the disaster recovery infrastructure across multiple regions
  • Cross-Region Replication in S3 to replicate backups across regions
  • CloudWatch alarms to monitor systems and automatically trigger failover if there are issues

Case: DDoS attack protection

Scenario: Consider a scenario where you need to design a scalable and secure web application infrastructure on AWS. The application should handle sudden spikes in traffic and protect against DDoS attacks. What AWS services and features would you use in your design?

Solution:

  • CloudFront and Route 53 for content delivery
  • An Auto Scaling group of EC2 instances across multiple Availability Zones for scalability
  • Shield for DDoS protection
  • CloudWatch for monitoring
  • Web Application Firewall (WAF) for filtering malicious requests

Case: Real-time data analytics

Scenario: An IoT startup wants to process and analyze real-time data from thousands of sensors across the globe. The solution needs to be highly scalable and cost-effective. Which AWS services would you use to build this platform, and how would you ensure it scales with demand?

Solution:

  • Kinesis for real-time data ingestion
  • EC2 and EMR for distributed processing
  • Redshift for analytical queries
  • Auto Scaling to scale resources up and down based on demand

Case: Large-volume data analysis

Scenario: A financial services company requires a data analytics solution on AWS to process and analyze large volumes of transaction data in real time. The solution must also comply with stringent security and compliance standards. How would you architect this solution using AWS, and what measures would you put in place to ensure security and compliance?

Solution:

  • Kinesis and Kafka for real-time data ingestion
  • EMR for distributed data processing
  • Redshift for analytical queries
  • CloudTrail and Config for compliance monitoring and configuration management
  • Multiple Availability Zones and IAM policies for access control

Non-Technical AWS Interview Questions

Besides technical prowess, understanding the broader impact of AWS solutions is vital to a successful interview, and below are a few questions, along with their answers. These answers can be different from one candidate to another, depending on their experience and background.

27. How do you stay updated with AWS and cloud technology trends?

  • Expected from candidate: The interviewer wants to know about your commitment to continuous learning and how you keep your skills relevant. They are looking for specific resources or practices you use to stay informed.
  • Example answer: "I stay updated by reading AWS official blogs and participating in community forums like the AWS subreddit. I also attend local AWS user group meetups and webinars. These activities help me stay informed about the latest AWS features and best practices."

28. Describe a time when you had to explain a complex AWS concept to someone without a technical background. How did you go about it?

  • Expected from candidate: This question assesses your communication skills and ability to simplify complex information. The interviewer is looking for evidence of your teaching ability and patience.
  • Example answer: "In my previous role, I had to explain cloud storage benefits to our non-technical stakeholders. I used the analogy of storing files in a cloud drive versus a physical hard drive, highlighting ease of access and security. This helped them understand the concept without getting into the technicalities."

29. What motivates you to work in the cloud computing industry, specifically with AWS?

  • Expected from candidate: The interviewer wants to gauge your passion for the field and understand what drives you. They're looking for genuine motivations that align with the role and company values.
  • Example answer: "What excites me about cloud computing, especially AWS, is its transformative power in scaling businesses and driving innovation. The constant evolution of AWS services motivates me to solve new challenges and contribute to impactful projects."

30. Can you describe a challenging project you managed and how you ensured its success?

  • Expected from candidate: Here, the focus is on your project management and problem-solving skills. The interviewer is interested in your approach to overcoming obstacles and driving projects to completion.
  • Example answer: "In a previous project, we faced significant delays due to resource constraints. I prioritized tasks based on impact, negotiated for additional resources, and kept clear communication with the team and stakeholders. This approach helped us meet our project milestones and ultimately deliver on time."

31. How do you handle tight deadlines when multiple projects are demanding your attention?

  • Expected from candidate: This question tests your time management and prioritization skills. The interviewer wants to know how you manage stress and workload effectively.
  • Example answer: "I use a combination of prioritization and delegation. I assess each project's urgency and impact, prioritize accordingly, and delegate tasks when appropriate. I also communicate regularly with stakeholders about progress and any adjustments needed to meet deadlines."

32. What do you think sets AWS apart from other cloud service providers?

  • Expected from candidate: The interviewer is looking for your understanding of AWS's unique value proposition. The goal is to see that you have a good grasp of what makes AWS a leader in the cloud industry.
  • Example answer: "AWS sets itself apart through its extensive global infrastructure, which offers unmatched scalability and reliability. Additionally, AWS's commitment to innovation, with a broad and deep range of services, allows for more flexible and tailored cloud solutions compared to its competitors."

Amazon interview experience (SDE-1, Amazon WOW 2020): https://www.geeksforgeeks.org/amazon-interview-experience-for-sde-1-amazon-wow-2020/?ref=asr13

Applied through a HackerEarth coding challenge (open to candidates with 6+ months of experience).

Round 1 (HackerEarth Round – Coding Challenge): Two questions were asked to be solved.

  1. One based on a priority queue
  2. One based on a map

I solved both questions within 40 minutes.

After a few weeks, I got a call to schedule the interview rounds.

Round 2 (Technical Interview): Two questions were asked, along with behavioural questions.

  1. The first question was based on binary search.
  2. The second was a DP problem on a binary tree.

I was able to solve both problems; some questions were also asked about my previous experience.

Round 3 (Technical Interview – Taken by an SDE-2): Two questions were asked, along with behavioural questions.

  1. The first question was similar to the problem https://www.geeksforgeeks.org/next-greater-element/
  2. The second question was a DP problem.

These were followed by questions about my previous experience and behavioural questions.

Round 4 (Technical Interview – Taken by a Senior SDM): Two questions were asked.

  1. The first question was based on OOP concepts: I was given some features that needed to be added to Amazon Alexa and was asked to implement generic, maintainable code using different OOP concepts.
  2. The second question was similar to https://www.geeksforgeeks.org/the-celebrity-problem/: I was given a single array of people and a function knows(a, b) that returns true if a knows b and false otherwise.

Some more questions based on my resume, past experience, college projects, and behaviour were also asked.

Round 5 (Bar Raiser Round – Taken by an SDE-2): Two questions, along with some resume-based and behavioural questions.

  1. The first question was based on an LRU cache; it was framed so that you first had to understand the problem itself by asking clarifying questions.
  2. The second question was similar to the problem https://www.geeksforgeeks.org/find-top-k-or-most-frequent-numbers-in-a-stream/.

Tips:

  1. Thoroughly go through the Amazon leadership principles, as most of the behavioural questions were based on them.
  2. Be vocal throughout your interview process; in case you are going in the wrong direction, the interviewer might help you.
  3. Ask questions to clarify the problems you are given.
  4. Always be prepared to dry-run the code you have written, as the interviewer might not code in the same language as you do.

A few days after completing all the rounds, I was informed that I was selected.

 


 

Amazon CloudTrail:

https://www.youtube.com/watch?v=g6st17puXxE

 

1. Q: You notice suspicious activity in your AWS account, like unauthorized IAM user creation. How would you use CloudTrail to investigate?

2. Q: You need to comply with a regulation that requires logging all API calls to your S3 buckets. How would you implement this with CloudTrail?

3. Q: Your CloudTrail logs are growing rapidly, leading to increased storage costs. How can you optimize log storage while maintaining necessary information?

 

Amazon CloudWatch:

 

1. Q: Your web application is experiencing performance issues during peak hours. How would you use CloudWatch to diagnose the bottleneck?

2. Q: You need to send automated notifications when specific CloudWatch alarms are triggered. How would you configure this?

3. Q: You want to analyze historical CloudWatch data for trends and anomalies. How would you achieve this?

 

AWS Lambda:

 

1. Q: You have a new API endpoint that needs to process image uploads asynchronously. How would you implement this using Lambda?

2. Q: Your Lambda function occasionally experiences cold starts, slowing down initial processing. How can you optimize its performance?

3. Q: You need to monitor the performance and stability of your Lambda functions. How would you utilize CloudWatch?

 

Amazon ECS:

 

1. Q: You're managing a microservices architecture in ECS. How would you handle scaling different services based on individual resource needs?

2. Q: You encounter high deployment failure rates when updating your ECS services. How would you improve reliability and rollback if necessary?

3. Q: You need to monitor the health and performance of your ECS clusters and tasks. How would you utilize CloudWatch and other tools?

 

Amazon DynamoDB:

 

1. Q: You plan to store user data for your mobile app in DynamoDB. How would you design the primary key for efficient queries?

2. Q: You experience slow DynamoDB read speeds for certain queries. How would you diagnose and improve performance?

3. Q: You need to backup and restore your DynamoDB data regularly. How would you achieve this?

 

CircleCI:

 

1. Q: Your team develops a microservices architecture on AWS. How would you integrate Circle CI with your development workflow to automate deployments and tests?

2. Q: You encounter intermittent build failures in Circle CI that make deployment unreliable. How would you troubleshoot the issue?

3. Q: You need to track and visualize your team's build history and performance in Circle CI. How would you achieve this?

 

Elastic Block Store (EBS):

 

1. Q: You need to choose the right EBS volume type for your production database instance. What factors would you consider and which type would you recommend?

2. Q: You encounter frequent EBS volume snapshots exceeding your billing budget. How can you optimize snapshot costs?

3. Q: Your application experiences slow disk performance during peak hours. How can you diagnose and improve EBS performance?

 

Elastic Kubernetes Service (EKS):

 

1. Q: You're planning to migrate containerized applications to EKS. What considerations should you have for security and access control?

2. Q: You encounter pod deployment failures in your EKS cluster. How would you approach troubleshooting and identifying the root cause?

3. Q: You want to scale your EKS cluster based on application load. How would you implement autoscaling?

AWS VPC Interview Questions With Sample Answers

Below are some AWS VPC interview questions with example answers that may inspire you as you prepare your own:

1. What is a VPC?

At the start of your interview, a hiring manager may focus on more general questions about your past experiences and skills. The hiring manager may ask this question to assess your ability to provide simple explanations for complex ideas. In your answer, focus on providing the definition and on highlighting your knowledge of the subject. You can prepare your answer by considering how you might explain the term to someone with no prior experience.

Example: A virtual private cloud, or VPC, is a logically isolated virtual network within the AWS cloud. In this space, you can launch and use AWS resources such as EC2 instances.

2. What are subnets when working with a VPC?

This is another simple question that a hiring manager may ask to assess your AWS VPC knowledge. It helps them learn more about your understanding of the service and your ability to provide clear explanations. To answer, focus on your knowledge of subnets and remain clear and concise.

Example: A subnet is a segment of the VPC's internet protocol (IP) address range. It is where you place groups of resources, and each subnet resides in a single Availability Zone.
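
To make the idea concrete, here is a minimal boto3 sketch that creates a VPC and carves a subnet out of its address range. The CIDR blocks and Availability Zone are illustrative assumptions, not fixed values:

import boto3

ec2 = boto3.client("ec2")  # assumes AWS credentials and a default region are configured

# Create a VPC with a /16 IPv4 address range.
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

# Carve a /24 subnet out of that range in a single Availability Zone.
subnet = ec2.create_subnet(
    VpcId=vpc_id,
    CidrBlock="10.0.1.0/24",
    AvailabilityZone="ap-south-1a",  # illustrative AZ; use one in your region
)
print(vpc_id, subnet["Subnet"]["SubnetId"])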

3. What is a NAT device?

A network address translation (NAT) device is a component that allows you to conserve IP addresses and add a layer of protection. A hiring manager may ask you about this to assess your ability to protect and maintain the network. In your answer, define NAT devices and explain why they are important.

Example: NAT devices block unsolicited inbound traffic from the internet while enabling outbound traffic to other services. When traffic leaves the VPC, the NAT device replaces the source IP address with its own. NAT devices are important because they provide additional security and prevent the depletion of public IP addresses.
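
As a rough illustration, this boto3 sketch provisions a public NAT gateway and routes private-subnet traffic through it; the subnet and route table IDs are hypothetical placeholders:

import boto3

ec2 = boto3.client("ec2")

# A public NAT gateway needs an Elastic IP and a public subnet to live in.
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0123456789abcdef0",  # hypothetical public subnet ID
    AllocationId=eip["AllocationId"],
)

# Point the private subnets' default route at the NAT gateway, so instances
# can initiate outbound connections but cannot be reached from the internet.
# (In practice, wait until the NAT gateway reaches the 'available' state first.)
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",  # hypothetical private route table ID
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat["NatGateway"]["NatGatewayId"],
)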

4. What is the difference between stateful and stateless filtering?

Filtering is important for the VPC, as it allows you to assess and restrict certain types of traffic. A hiring manager may ask you about these two types of filtering to determine what you know about AWS VPC security. To answer this question, define both types of filtering and explain their differences.

Example: Stateful filtering tracks the state of each connection, so the reply to an allowed request is automatically permitted back to the originating device. Stateless filtering evaluates each packet independently against its rules, checking the source and destination without remembering the original request. In practice, security groups use stateful filtering while network access control lists (ACLs) use stateless filtering.
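
The difference shows up directly in how rules are written. In this boto3 sketch (the VPC ID is a hypothetical placeholder), the stateful security group needs only an inbound rule, while the stateless network ACL needs explicit rules in both directions:

import boto3

ec2 = boto3.client("ec2")
VPC_ID = "vpc-0123456789abcdef0"  # hypothetical VPC ID

# Security group (stateful): allow inbound HTTPS; replies to clients are
# permitted automatically, so no matching outbound rule is required.
sg = ec2.create_security_group(
    GroupName="web-sg", Description="stateful filtering demo", VpcId=VPC_ID
)
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# Network ACL (stateless): every packet is evaluated on its own, so both
# the inbound request and the outbound reply must be allowed explicitly.
acl_id = ec2.create_network_acl(VpcId=VPC_ID)["NetworkAcl"]["NetworkAclId"]
ec2.create_network_acl_entry(
    NetworkAclId=acl_id, RuleNumber=100, Protocol="6",  # protocol 6 = TCP
    RuleAction="allow", Egress=False, CidrBlock="0.0.0.0/0",
    PortRange={"From": 443, "To": 443},
)
ec2.create_network_acl_entry(
    NetworkAclId=acl_id, RuleNumber=100, Protocol="6",
    RuleAction="allow", Egress=True, CidrBlock="0.0.0.0/0",
    PortRange={"From": 1024, "To": 65535},  # ephemeral ports for the replies
)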

5. What are the advantages of using a default VPC?

A default VPC is a network that the system creates automatically when a user first utilises resources. It is an individual network in the cloud, and a hiring manager may ask you about it because you may use it frequently as a cloud architect or data professional. In your answer, you can provide a brief definition of a default VPC and then detail its advantages.

Example: When you use Amazon's Elastic Compute Cloud (EC2) resources for the first time, the system automatically provides a default VPC in the AWS cloud account. The default VPC is useful because it comes preconfigured: it has a default subnet in each Availability Zone, an internet gateway and routing already in place, so you can use public and private IPs and network interfaces straight away without creating a new VPC.

6. What are the internet gateways in VPC?

Internet gateways are important for communication between the VPC and the internet. A hiring manager may ask you about internet gateways to assess your knowledge and evaluate your ability to regulate traffic within the VPC. To answer this question, briefly define internet gateways and describe their functions.

Example: An internet gateway is the component that allows a VPC to connect to the internet. It is horizontally scaled and highly available, and only one gateway can be attached to each VPC. The internet gateway also performs network address translation for instances that have public IP addresses.
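
A short boto3 sketch of the mechanics, with hypothetical VPC and route table IDs: create the gateway, attach it (at most one per VPC), and route internet-bound traffic to it:

import boto3

ec2 = boto3.client("ec2")

igw = ec2.create_internet_gateway()
igw_id = igw["InternetGateway"]["InternetGatewayId"]

# A VPC can have at most one internet gateway attached at a time.
ec2.attach_internet_gateway(
    InternetGatewayId=igw_id,
    VpcId="vpc-0123456789abcdef0",  # hypothetical VPC ID
)

# Send internet-bound traffic from the public subnet's route table
# through the gateway.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",  # hypothetical route table ID
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId=igw_id,
)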

7. What is a network ACL in a VPC?

The network ACL provides rules regarding which traffic can enter and leave the subnets of a VPC. Network ACLs are essential to maintaining security and order. A hiring manager may ask you about network ACLs to better understand your VPC knowledge and assess your ability to maintain the system.

Example: ACL stands for access control list, and a network ACL functions as an ordered list of rules that allow or deny traffic. The network ACL regulates inbound and outbound traffic at the subnet level, and the VPC generates a default ACL automatically. You can modify the automatically generated ACL to include specific rules.

8. What are the security groups in a VPC?

Organisations often store important information in VPCs, so it is important that the VPC's security is functional. If you are a professional who works with VPCs, a hiring manager may ask this question to ensure that you know how to protect private data. To answer this question, define a security group and where it is found.

Example: Security groups are components that manage traffic at the instance level within the VPC. They act as virtual firewalls, using specific rules to control inbound and outbound traffic. You can find security groups in the VPC and EC2 sections of the AWS console.

9. What is an elastic IP address?

Questions about IP addresses are very common in AWS VPC interviews. IP addresses are specific strings of characters that identify devices communicating over the internet. A hiring manager may ask you this question to determine whether you know the differences between the various types of IP addresses. To answer this question, define an elastic IP address and explain how it works.

Example: An elastic IP address behaves much like a public IP address, except that it remains allocated to your AWS account until you release it. You can detach and reattach elastic addresses to different instances within the VPC.
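
The allocate/attach/detach lifecycle is easy to demonstrate with boto3; the instance IDs below are hypothetical placeholders:

import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP; it stays with the account until released.
eip = ec2.allocate_address(Domain="vpc")

# Associate it with one instance now...
assoc = ec2.associate_address(
    InstanceId="i-0123456789abcdef0",  # hypothetical instance ID
    AllocationId=eip["AllocationId"],
)

# ...and later it can be detached and re-attached to a different instance:
# ec2.disassociate_address(AssociationId=assoc["AssociationId"])
# ec2.associate_address(InstanceId="i-0fedcba9876543210", AllocationId=eip["AllocationId"])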

10. How does ELB affect a VPC?

Elastic load balancing (ELB) is a service that distributes incoming traffic across your AWS deployments, which allows requests to be served faster and more reliably. A hiring manager may ask you this question to determine your knowledge of security and scalability for VPCs. To answer this question, you can list the different types of ELB and explain how they work within the VPC.

Example: Elastic load balancing, or ELB, distributes incoming work across multiple targets so that no single instance is overwhelmed. The three types of ELB are network, application and classic. You can use the application and network load balancers to route traffic to specific targets within your VPC.
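
As a rough sketch, this boto3 call creates an internet-facing Application Load Balancer across two subnets (the subnet IDs are hypothetical placeholders); traffic is then routed to targets registered in target groups:

import boto3

elbv2 = boto3.client("elbv2")

lb = elbv2.create_load_balancer(
    Name="web-alb",
    Subnets=[
        "subnet-0aaa0123456789abc",  # hypothetical subnet in one AZ
        "subnet-0bbb0123456789abc",  # hypothetical subnet in another AZ
    ],
    Scheme="internet-facing",
    Type="application",
)
print(lb["LoadBalancers"][0]["DNSName"])  # clients resolve this DNS name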

11. What are the limits for VPCs, gateways and subnets?

When using AWS VPC, you create components such as subnets and gateways. A hiring manager may ask about the system's limits to assess your level of experience using it. This question can also assess your knowledge of connectivity within the VPC. To answer, you can provide the default limit for each component in the question.

Example: Limits within the AWS VPC system exist to maintain the system's operating quality, and most are default quotas that AWS can raise on request. By default, you can create five VPCs, five elastic IP addresses and 50 customer gateways per region, and 200 subnets per VPC.

12. What is a public IP address?

A hiring manager may ask this question to see how much you know about IP addresses. To answer, you can define a public IP address; it is also helpful to compare it to the other types of addresses.

Example: A public IP address is an internet-reachable identifier for an instance. When you launch an instance in a default VPC, the system automatically creates and assigns one. A public address differs from a private address in that when you stop the instance, the system releases the public address and assigns a new one on restart, whereas the private address persists.

 

AWS Scenario-Based Interview Questions And Answers

Here are some AWS scenario-based interview questions that interviewers may ask during the technical round:

1. If I have some private servers on my premises and some of my workloads get distributed on the public cloud, what do you call this architecture?

Hybrid cloud architecture allows for sharing apps and data while maintaining the ability to be managed as a unified information technology (IT) architecture. It combines public and private clouds across a wide area network or broadband connection. For this question, you can mention the type of architecture and the reason for its usage.

Example: This kind of architecture is a hybrid cloud, because it utilises both on-site servers (the private cloud) and the public cloud. It is easier to manage if your private and public clouds are all on the same network. A common pattern is to place your public cloud servers in a virtual private cloud and use a virtual private network (VPN) to connect it to your on-premises systems.

 

-------------------------------------------------------------------------------------------------------------------------------------

 

2. You have a video transcoding application. A queue processes these videos. When video processing gets interrupted in one instance, it resumes in another. Currently, there is a massive backlog of videos to be processed. More instances are required for this, but only until your backlog reduces. Which scenario can be the most efficient?

Ans: On-Demand Instances

Online video transcoding involves changing a file's format to one that is more efficiently compressed so viewers can stream content at the best quality without buffering.

With On-Demand Instances, there are no long-term commitments and you pay for compute capacity by the hour or second. This eliminates high fixed costs and the complicated tasks of planning, purchasing and maintaining hardware.

Example: You can use On-Demand Instances. The workload requires immediate completion, which makes it an urgent requirement. Once your backlog is cleared, you will rarely require the extra instances, so Reserved Instances are not a good fit. Finally, because the work is urgent, you cannot halt it just because the Spot price increases, so Spot Instances are also unsuitable. As a result, On-Demand Instances are the most efficient option here.
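
A minimal boto3 sketch of the reasoning above, with a hypothetical AMI ID: extra On-Demand workers are launched while the queue is deep and simply terminated once the backlog clears:

import boto3

ec2 = boto3.client("ec2")

# On-Demand is the default purchasing option for run_instances: billed only
# while running, with no long-term commitment.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical transcoder AMI
    InstanceType="c5.xlarge",
    MinCount=1,
    MaxCount=10,  # scale out while the backlog is large
)

# Spot capacity would be requested by adding InstanceMarketOptions, but it
# can be reclaimed mid-job, which is why it is avoided for urgent work:
# ec2.run_instances(..., InstanceMarketOptions={"MarketType": "spot"})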

-------------------------------------------------------------------------------------------------------------------------------------

 

3. Suppose you have an AWS application that requires 24x7 availability and can be down for a maximum of 15 minutes. How can you ensure that your Elastic Block Store (EBS) volume database is backed up?

The text-based command line used to access the operating system is the command-line interface (CLI). An application programming interface (API) gives programmers an effective way to build software programs and make them simpler to automate. You can explain how to use the two for cloud computing administration.

Example: The important procedure here is automated backups, because they operate without requiring manual intervention. The API and CLI are essential for automating the backup process through scripts. The best option is to schedule timely snapshots of the EBS volumes attached to the EC2 instances. In the event of a disaster or downtime, you can recover the database instance from the EBS snapshots, which are kept in Amazon S3.

-------------------------------------------------------------------------------------------------------------------------------------

 

4. When you launch instances in a cluster placement group, what network performance parameters can you expect?

You can include peered virtual private clouds (VPCs) in the same region in a cluster placement group. Instances in the same cluster placement group share a high-bisection-bandwidth network segment and have a higher per-flow TCP/IP traffic throughput limit.

Example: If an instance is launched in a cluster placement group, you can anticipate up to ten Gbps of single-flow and 20 Gbps of multi-flow (full duplex) network performance. The placement group's network traffic may not exceed five Gbps (full duplex).
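
Creating and using a cluster placement group is a two-step affair in boto3; the AMI ID here is a hypothetical placeholder:

import boto3

ec2 = boto3.client("ec2")

# Instances launched into a cluster placement group are packed close
# together for low latency and high bisection bandwidth.
ec2.create_placement_group(GroupName="transcode-cluster", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI ID
    InstanceType="c5n.9xlarge",       # network-optimised types suit cluster groups
    MinCount=2,
    MaxCount=2,
    Placement={"GroupName": "transcode-cluster"},
)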

-------------------------------------------------------------------------------------------------------------------------------------

 

5. How can you prepare an instance to serve traffic by configuring it with the application and its dependencies?

You can use lifecycle hooks to manage how your Amazon EC2 instances start and terminate as you scale in and out. When an instance is launching, you might download and install software; when an instance is terminating, you might archive instance log files to Amazon Simple Storage Service (S3).

Example: Using lifecycle hooks can help achieve this. They are effective because they let you pause the creation or termination of an instance and conduct customised operations, such as configuring the instance, obtaining the necessary files and carrying out any other tasks. Each Auto Scaling group can have multiple lifecycle hooks.
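
A minimal boto3 sketch of a launch lifecycle hook (the Auto Scaling group name is a hypothetical placeholder): new instances pause in a wait state until a bootstrap step signals that configuration is complete:

import boto3

autoscaling = boto3.client("autoscaling")

# Pause each newly launched instance so the application and its
# dependencies can be installed before it starts serving traffic.
autoscaling.put_lifecycle_hook(
    LifecycleHookName="configure-before-service",
    AutoScalingGroupName="web-asg",  # hypothetical Auto Scaling group name
    LifecycleTransition="autoscaling:EC2_INSTANCE_LAUNCHING",
    HeartbeatTimeout=300,      # seconds allowed for configuration
    DefaultResult="ABANDON",   # discard the instance if configuration never completes
)

# The bootstrap script signals the hook once configuration succeeds:
# autoscaling.complete_lifecycle_action(
#     LifecycleHookName="configure-before-service",
#     AutoScalingGroupName="web-asg",
#     LifecycleActionResult="CONTINUE",
#     InstanceId="i-0123456789abcdef0",  # the instance being configured
# )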

-------------------------------------------------------------------------------------------------------------------------------------

 

6. How can you safeguard EC2 instances running on a VPC?

You can establish a virtual network in your own logically isolated area of the AWS cloud using Amazon Virtual Private Cloud (Amazon VPC), also known as a virtual private cloud or VPC. In the subnets of your VPC, you can place AWS resources such as Amazon EC2 instances.

Example: You can protect EC2 instances operating in a VPC by using AWS security groups linked to those instances to provide security at the protocol and port access levels. You can set up secured access for the EC2 instance for both inbound and outbound traffic. Like a firewall, AWS security groups contain rules that filter the traffic flowing into and out of an EC2 instance and prevent any unauthorised access.

-------------------------------------------------------------------------------------------------------------------------------------

 

7. Where does an AMI fit while designing an architecture for a solution?

You can launch an instance using an Amazon Machine Image (AMI), a supported and maintained image offered by AWS. When you launch an instance, you may specify an AMI. If you require numerous instances with the same configuration, you can launch them all from a single AMI.

Example: An instance comes from an AMI, which acts as a blueprint for a virtual machine. When launching an instance, you can select from the pre-built AMIs that AWS provides. Some of these AMIs are not free; users can purchase them through the AWS Marketplace. You can also design your own custom AMI, including only the software your installation requires. Stripping out unnecessary software keeps the image smaller and reduces costs.
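
The blueprint idea maps to two boto3 calls: bake an image from a configured instance, then launch identical copies from it. The instance ID is a hypothetical placeholder:

import boto3

ec2 = boto3.client("ec2")

# Bake a custom AMI from an already-configured instance.
image = ec2.create_image(
    InstanceId="i-0123456789abcdef0",  # hypothetical configured instance
    Name="web-server-golden-image",
    Description="Application and dependencies pre-installed",
)

# Wait until the image is ready, then launch identical copies from it.
ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])
ec2.run_instances(
    ImageId=image["ImageId"],
    InstanceType="t3.micro",
    MinCount=3,
    MaxCount=3,
)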

-------------------------------------------------------------------------------------------------------------------------------------

 

8. You are configuring an Amazon S3 bucket to serve static assets for your public-facing web application. Which method can ensure that all objects uploaded to the bucket are set to public read?

You can securely manage access to AWS resources with the help of the AWS Identity and Access Management (IAM) web service. IAM enables you to control who has access to resources and what they are authorised to do with them.

Example: Setting a policy for the entire bucket is preferable to altering each object individually. As this is a public website, a single bucket policy can make all objects readable by default. IAM is also useful for granting more specific permissions to internal users.
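
A sketch of the bucket-policy approach with boto3 (the bucket name is a hypothetical placeholder): one policy statement grants read access to every current and future object, which is simpler than setting per-object ACLs:

import boto3
import json

s3 = boto3.client("s3")
BUCKET = "my-static-assets-bucket"  # hypothetical bucket name

# Grant anonymous read on all objects in the bucket; note that the account's
# S3 Block Public Access settings must permit public bucket policies.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicReadForStaticAssets",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
    }],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))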

-------------------------------------------------------------------------------------------------------------------------------------

 

9. A company wants to use AWS to deploy a two-tier web application. Complex query processing and table joins are required by the application. But the company has limited resources and requires high availability. Based on the requirements, what is the best configuration that the company can choose?

The first tier of a two-tier architecture is for client placement. The database server and the web application server sit on the same server machine, which forms the second tier. The second tier serves data and performs the business logic operations.

Example: I believe DynamoDB addresses the fundamental issues of database scalability, management, stability and performance, though it lacks the features of an RDBMS: it does not support complex query processing or table joins. For that kind of functionality with high availability on limited resources, the company can run a relational engine on Amazon RDS or EC2.

 
