AWS Tutorial For Beginners | AWS Certified Solutions Architect | AWS Training | Edureka

Introduction and Problems with Traditional Server Setup
- Traditional server setup required purchasing a lot of servers, including extra ones for peak times.
- Monitoring and maintaining these servers diverted time and effort away from core business goals.
- The setup was expensive, and servers were idle most of the time, resulting in a bad investment.
Advantages and Parameters to Consider When Choosing a Cloud Provider
- Renting servers instead of buying them cuts down costs significantly.
- Cloud computing allows for easy scalability, allowing businesses to scale up or down according to their needs.
- Cloud providers manage and update servers, freeing up businesses to focus on their applications.
- Choosing the right cloud provider is crucial for success, as new players may not be equipped to handle potential issues.
- When choosing a cloud provider, consider its maturity, server capacity, and adoption by successful companies such as Netflix.
- Cloud computing is the use of remote servers for storing, managing, and processing data.
Why AWS is the Preferred Cloud Provider
- AWS has a global cloud computing market share of 31%, leading all its competitors.
- AWS alone has a server capacity 6 times that of all its competitors combined.
- AWS offers flexible pricing, charging users based on the number of hours they use the service.
Overview of AWS Domains
- AWS offers a secure Cloud Services platform.
- It provides services in Compute, Database, Content Delivery, and other domains.
- The Compute domain includes EC2, an Elastic Compute Cloud that allows for configuration and hosting of websites.
- The Migration domain handles transferring data to and from the AWS Infrastructure.
- Services in this domain, such as the Migration Service and Snowball, are used for large-scale data transfer; Snowball uses physical devices for data transportation.
Overview of AWS Domains and Services
- Snowball offers migration services for transferring large amounts of data physically to AWS Infrastructure.
- The Security and Identity Compliance domain includes services like IAM, which is used to authenticate users and define user rights.
- The Storage Domain includes services like S3, a file system for storing and accessing files.
- The Networking and Content Delivery domain includes services like Route 53, a domain name system for redirecting traffic to web applications.
- The Messaging Domain includes services like SES (Simple Email Service), used for sending bulk emails and managing customer replies.
- The Database domain includes services for managing and storing data in a structured manner.
AWS Compute Domain Services
- EC2: A raw server that can be configured to be anything, such as a web server or a work environment. It offers resizability and the ability to launch multiple servers with the same configuration.
- Lambda: An advanced version of EC2 that allows you to run code without provisioning or managing servers. It automatically scales based on incoming requests.
- Elastic Beanstalk: Another advanced version of EC2 that makes it easy to deploy and manage applications. It handles infrastructure and resource provisioning, allowing developers to focus on writing code.
- Batch: A service that helps you run batch computing workloads on the AWS Cloud. It automatically provisions and scales the resources needed to run your batch jobs efficiently.
- Auto Scaling: A service that automatically adjusts the number of EC2 instances in a fleet based on predefined conditions, ensuring that you have the right amount of resources to handle your workload.
AWS Services Overview
- EC2 is a flexible service that allows users to resize and configure their instances.
- AWS Lambda is used for executing background tasks in response to events or triggers.
- Elastic Beanstalk is an automated version of EC2 and is used for hosting applications.
Elastic Beanstalk and Elastic Load Balancer
- Elastic Beanstalk simplifies the process of deploying an application or website by handling the installation of configuration files and providing a ready-to-use environment.
- If your application's platform is among the environments Elastic Beanstalk supports, it is recommended to use Elastic Beanstalk for hosting; otherwise, EC2 can be used.
- With Elastic Beanstalk, configuration is easy as the necessary software and firewall settings are automatically managed.
- Elastic Load Balancer is used to evenly distribute traffic among multiple instances to ensure efficient workload distribution.
- Using Elastic Load Balancer prevents all traffic from being directed to a single instance, which can lead to inefficient resource usage.
Elastic Load Balancer and AutoScaling
- Elastic Load Balancer distributes the workload equally among instances to ensure efficient and consistent performance.
- Elastic Load Balancer is used to ensure that all users experience the same response time on a website hosted in AWS.
- AutoScaling is a service used to automatically scale up or down the number of instances based on pre-defined metrics.
- AutoScaling and Elastic Load Balancer go hand-in-hand and should be used together.
- With AutoScaling, you can set metrics to launch new servers when CPU usage goes beyond a certain threshold.
- AutoScaling can also decommission servers when CPU usage goes below a certain threshold to scale down.
- AutoScaling and Elastic Load Balancer work together to distribute the workload among instances and maintain consistent performance (a CLI sketch of a CPU-threshold scaling policy follows this list).
- To deploy a new EC2 instance, you can access the AWS Console and navigate to the EC2 dashboard.
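As a rough illustration of the CPU-threshold scaling described above, the AWS CLI can create a simple scaling policy and tie it to a CloudWatch alarm. This is a minimal sketch, not the demo's configuration: the Auto Scaling group name, policy name, and threshold values are placeholders.

    # Sketch: add one instance to a (hypothetical) Auto Scaling group "my-asg" when the alarm fires.
    aws autoscaling put-scaling-policy \
        --auto-scaling-group-name my-asg \
        --policy-name scale-out-on-cpu \
        --adjustment-type ChangeInCapacity \
        --scaling-adjustment 1
    # Alarm when average CPU across the group stays above 70% for two 5-minute periods;
    # --alarm-actions takes the policy ARN returned by the previous command.
    aws cloudwatch put-metric-alarm \
        --alarm-name cpu-above-70 \
        --namespace AWS/EC2 \
        --metric-name CPUUtilization \
        --statistic Average \
        --period 300 \
        --evaluation-periods 2 \
        --threshold 70 \
        --comparison-operator GreaterThanThreshold \
        --dimensions Name=AutoScalingGroupName,Value=my-asg \
        --alarm-actions <policy-arn-from-previous-command>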
Launching an EC2 Instance with Windows
- Select the "Windows" option.
- Choose the t2.micro instance type.
- Launch only one instance for the demo.
- Choose the default 30GB storage.
- Add tags to name your instance.
- Configure security group to control inbound and outbound traffic.
- Review all the settings and click on Launch.
- Create a new key pair or use an existing one.
- Download the private key file and keep it safe.
- Launch the instance.
- Check the EC2 dashboard to see if the instance is listed.
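The same launch can also be scripted with the AWS CLI instead of the console wizard. A hedged sketch: the AMI ID, key pair name, and security group ID are placeholders and would need to be replaced with real values from your account.

    # Launch one t2.micro Windows instance with a Name tag (all IDs here are placeholders).
    aws ec2 run-instances \
        --image-id ami-0123456789abcdef0 \
        --instance-type t2.micro \
        --count 1 \
        --key-name my-key-pair \
        --security-group-ids sg-0123456789abcdef0 \
        --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=demo-windows}]'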
Connecting to an EC2 server and accessing the desktop.
- 3 running instances are visible after launching a new EC2 server.
- Clicking on the instance reveals details such as instance name, type, key pair, timestamp, and security group.
- The public IP address is used to connect to the instance through a remote desktop connection.
- The IP address is copied from the instance details.
- The default username is Administrator and the password is decrypted using the uploaded PEM file.
- The remote desktop connection is opened and the IP address is pasted.
- The username and decrypted password are entered to establish the connection.
- The desktop of the EC2 server is accessed and can be configured and installed with software.
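For reference, the Windows Administrator password can also be retrieved and decrypted from the command line using the downloaded PEM file. A minimal sketch; the instance ID and key file path are placeholders.

    # Returns the decrypted Administrator password for the (hypothetical) instance.
    aws ec2 get-password-data \
        --instance-id i-0123456789abcdef0 \
        --priv-launch-key ./my-key-pair.pem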
AWS Storage Domain Services
- S3 (Simple Storage Service) is object-based storage: files are treated as objects and stored in buckets (a short CLI sketch follows this list).
- CloudFront is a content delivery network that caches websites at locations near users, reducing latency.
- Edge locations are the web servers used to cache websites near users in the Content Delivery Network.
- Elastic Block Storage (EBS) acts as a hard drive for EC2 instances, storing the operating system or software.
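To make the bucket-and-object model concrete, here is a small AWS CLI sketch; the bucket and file names are hypothetical.

    # Create a bucket, upload a file as an object, and list the bucket's contents.
    aws s3 mb s3://my-demo-bucket-12345
    aws s3 cp ./photo.jpg s3://my-demo-bucket-12345/images/photo.jpg
    aws s3 ls s3://my-demo-bucket-12345/images/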
Overview of AWS Services - EC2, EBS, Amazon Glacier, and Snowball.
- EC2 (Elastic Compute Cloud) is a service that provides virtual servers in the cloud.
- EBS (Elastic Block Store) is a block storage service that can be used with EC2 instances.
- EBS volumes can only be attached to one EC2 instance at a time, but an EC2 instance can have multiple EBS volumes attached (a CLI sketch of creating and attaching a volume follows this list).
- Amazon Glacier is a data archiving service that uses magnetic tapes for cost-effective storage.
- Glacier is ideal for storing data that is not frequently accessed, such as old medical records.
- Snowball is a physical device used to transfer large amounts of data to and from AWS Infrastructure.
- Snowball is useful when transferring petabytes of data that would take a long time over the internet.
- Snowball devices are shipped by AWS to transfer data from a data center to AWS Infrastructure.
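As mentioned in the EBS bullet above, a volume is created separately and then attached to a single instance. A minimal CLI sketch, with the Availability Zone, volume ID, and instance ID as placeholders.

    # Create a 10 GiB general-purpose volume and attach it to one instance as /dev/sdf.
    aws ec2 create-volume --availability-zone us-east-1a --size 10 --volume-type gp2
    aws ec2 attach-volume \
        --volume-id vol-0123456789abcdef0 \
        --instance-id i-0123456789abcdef0 \
        --device /dev/sdf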
AWS Snowball and Storage Gateway
- Snowball is used to transfer large amounts of data offline, particularly for petabyte-scale transfers.
- With Snowball, the data is transferred onto a physical device, which is then shipped back to AWS for data upload.
- Using Snowball can save time and costs compared to transferring data over the internet.
- Storage Gateway is a service used to store snapshots of databases, and can sit either between a datacenter and the cloud or entirely within a datacenter.
- It sits between database servers and application servers, continuously taking snapshots of databases and storing them on S3.
- If a database server fails, Storage Gateway can restore the server using the relevant snapshot.
- Storage Gateway can also be used between EC2 and RDS instances in the AWS infrastructure.
- RDS is a relational database management service, not a database itself.
- It manages relational databases like MySQL, Oracle, MariaDB, PostgreSQL, Microsoft SQL server, and Amazon Aurora.
Comparison between RDS, Amazon Aurora, and DynamoDB
- RDS is a management service for relational databases, while DynamoDB is a management service for non-relational (NoSQL) databases.
- Amazon Aurora, included in RDS, is a relational database that is based on MySQL but claims to be 5 times faster.
- With Amazon Aurora, you can use the same code as MySQL without any changes, but experience a performance boost.
- RDS automatically handles tasks like updating security patches and the database engine.
- DynamoDB is a NoSQL database that also gets automatically managed, with no manual intervention required.
- DynamoDB scales automatically without the need to specify storage requirements, growing or shrinking as needed.
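To illustrate that DynamoDB needs no capacity or storage sizing up front, the sketch below creates a table with on-demand (pay-per-request) billing; the table and key names are hypothetical.

    # Create a NoSQL table keyed on "ImageId"; no storage or throughput is pre-provisioned.
    aws dynamodb create-table \
        --table-name Images \
        --attribute-definitions AttributeName=ImageId,AttributeType=S \
        --key-schema AttributeName=ImageId,KeyType=HASH \
        --billing-mode PAY_PER_REQUEST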
Overview of DynamoDB, ElastiCache, and RedShift
- DynamoDB is a database management service for non-relational databases, designed for storing unstructured data.
- It automatically scales storage space based on the data requirements, eliminating the need for manual intervention.
- ElastiCache is a caching service that helps reduce the overhead on databases by storing frequently accessed query results, improving performance.
- It sets up, manages, and scales a distributed environment in the cloud.
- RedShift is a petabyte-scale data warehouse service that serves as an analytics tool.
- It can be used to analyze data stored in databases like RDS and DynamoDB.
- All three services offer specific functionalities and capabilities for different database needs.
Launching an RDS Instance in AWS
- To launch an RDS instance, go to the AWS Management Console and navigate to the RDS dashboard.
- Click on "Instances" and then click on "Launch DB Instance".
- Select the database you want to manage, such as MySQL, and click on "Select".
- Choose the environment you want to launch the instance in, either prod or dev/test.
- Configure the instance settings, such as the DB instance class and Multi-AZ (multi-Availability-Zone) deployment.
- Select the storage type based on your use case, such as SSD or Provisioned IOPS.
- Provide a name for the database and set the master username and password.
- Choose the VPC and VPC Security Group for the instance.
- Select the desired database version and other settings.
- Review the configurations and click on "Launch DB Instance" to create the RDS instance.
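The same instance can be created from the AWS CLI instead of the console wizard. A hedged sketch: the identifier, instance class, credentials, and storage size below are placeholders, not the values used in the demo.

    # Create a small MySQL RDS instance with 20 GiB of storage.
    aws rds create-db-instance \
        --db-instance-identifier my-mysql-db \
        --db-instance-class db.t2.micro \
        --engine mysql \
        --master-username admin \
        --master-user-password 'ChangeMe123!' \
        --allocated-storage 20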
Connecting to RDS Instance using MySQL Command Line
- Launch the RDS DB instance and wait for it to be created.
- Open the command prompt and navigate to the bin directory of the MySQL installation.
- Use the command "mysql -h [endpoint] -P [port] -u [username] -p" to connect to the RDS instance.
- Copy the endpoint from the AWS console and paste it as the value of the -h parameter.
- Remove the trailing ":3306" from the endpoint and pass the port separately as "-P" followed by the port number.
- Specify the username using "-u" followed by the username.
- Hit Enter and enter the password when prompted.
- If successful, the connection to the RDS instance will be established.
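Putting the pieces together, a filled-in version of the connection command looks like the following; the endpoint and username are placeholders for the values shown in your RDS console.

    # Connect to the RDS endpoint on the default MySQL port; the password is prompted for.
    mysql -h my-mysql-db.abcdefghijkl.us-east-1.rds.amazonaws.com -P 3306 -u admin -p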
Summary of Database Services and Networking Services in AWS.
- RDS is a relational database management service.
- Aurora is a database built by Amazon based on MySQL, which performs 5 times faster than MySQL.
- DynamoDB is a database management service for noSQL databases.
- ElastiCache is an in-memory caching service used to cache query results, reducing latency and database overhead.
- RedShift is a data warehouse service used for data analysis.
- VPC (Virtual Private Cloud) is an isolated virtual network in which AWS resources can see and interact with each other.
- VPC can also be used to connect private data centers to AWS infrastructure via a VPN connection.
- Direct Connect is a leased line that provides a direct connection to the AWS infrastructure, bypassing the internet.
- Route 53 is a domain name system that converts URLs to IP addresses.
AWS Management Domain Services
- CloudWatch is a monitoring tool used to monitor AWS resources and set alarms for specific conditions.
- CloudFormation is used to templatize AWS infrastructure for easy deployment in different environments.
- CloudTrail is a logging service that logs API requests and responses for troubleshooting purposes.
Overview of AWS Services
- CloudTrail: Allows users to track and troubleshoot errors by storing logs in AWS S3.
- CLI (Command Line Interface): A command-line replacement for the AWS console GUI, used for tasks such as deploying instances (a short sketch follows this list).
- OpsWorks: A configuration management tool consisting of stacks and layers, used to manage and change settings across multiple AWS services.
- Trusted Advisor: Acts as a personal assistant, providing advice on monthly expenditure, best practices, and IAM policies.
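As a small illustration of the CLI bullet above, the CLI is configured once with access keys and can then perform the same operations as the console. A sketch, assuming the AWS CLI is installed and an IAM access key is available.

    # One-time setup: prompts for the access key, secret key, default region, and output format.
    aws configure
    # Example console replacement: list all EC2 instances in the configured region.
    aws ec2 describe-instances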
AWS Security and Application Domains
- The Security Domain includes 2 services: IAM and Key Management Service (KMS).
- IAM allows users to have granular permissions and control over AWS infrastructure.
- KMS provides public and private keys for authentication to AWS instances.
- Losing private keys means losing access to AWS resources using that key.
- The Application Domain includes 3 services: Simple Email Service (SES), Simple Queue Service (SQS), and Simple Notification Service (SNS).
- SES enables easy email sending and automated replies.
- SQS acts as a buffer for tasks in a processing application, allowing multiple servers to work on different tasks.
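To make the buffering idea concrete, here is a minimal SQS sketch: one component enqueues a task and a worker pulls it later. The queue name, queue URL, and message body are hypothetical.

    # Create the queue, push a task onto it, and let a worker pull the next task.
    aws sqs create-queue --queue-name image-tasks
    aws sqs send-message \
        --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/image-tasks \
        --message-body "resize photo.jpg"
    aws sqs receive-message \
        --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/image-tasks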
Summary of AWS services and pricing models.
- AWS provides services such as SES, SQS, and SNS.
- SES is used for sending emails, SQS is a message queuing service, and SNS sends notifications to other AWS services.
- SQS queues tasks and messages, which are processed in a first-in, first-out order.
- SNS sends notifications to SQS and SES for tasks and email notifications, respectively.
- AWS pricing is based on a "pay as you go" model, where users pay for the services they use.
- This model allows for flexibility and cost savings, as users only pay for the resources they actually utilize.
- Another pricing model offered by AWS is "pay less by using more," where users can receive volume discounts by using more resources.
AWS Pricing Models
- Pricing for S3 storage is based on the amount of storage used, with lower rates for higher usage.
- The "pay less by using more" concept means that as usage of S3 storage increases, the pricing per GB per month decreases.
- The "save when you reserve" model allows users to reserve instances for one or three years, resulting in cost savings of up to 75%.
- Reserving instances is beneficial for long-term usage scenarios, while on-demand instances are more suitable for short-term or uncertain usage.
- Hosting a website on AWS infrastructure involves setting up a website address, a file server for uploading and storing images, and a database for managing image metadata.
- The website should be able to auto scale and be highly available to handle traffic and ensure reliability.
Architecting the AWS Resources for a Website
- The architecture discussed involves the use of AWS resources.
- Route 53 will be used for the domain name system.
- Elastic Beanstalk will host the PHP application.
- Under the hood, Elastic Beanstalk uses EC2 instances, Auto Scaling, and Load Balancer.
- RDS will be used for the database.
- S3 will be used as the file system.
- IAM will provide access keys for authentication to S3.
- The architecture allows for uploading files to S3 and storing their paths in a MySQL database (a command-line sketch of this flow follows this list).
- The website can be accessed and files can be uploaded successfully.
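A command-line sketch of that upload flow (outside the PHP application) might look like this; the bucket, database, table, and column names are hypothetical, not the ones used in the demo.

    # Put the image into S3, then record its path in the MySQL database on RDS.
    aws s3 cp ./photo.jpg s3://my-image-bucket/uploads/photo.jpg
    mysql -h my-mysql-db.abcdefghijkl.us-east-1.rds.amazonaws.com -u admin -p \
        -e "INSERT INTO images (file_path) VALUES ('uploads/photo.jpg');" imagesdb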
Steps to upload a website on AWS Elastic Beanstalk and migrate a database to RDS.
- Check if the image file is listed in the MySQL database.
- Access the S3 service in the AWS Storage domain.
- Confirm the presence of the image file in the S3 bucket.
- Compare the file name in the database and the S3 bucket.
- Verify that the file in the S3 bucket is the same as the uploaded file.
- Test the website to ensure it is working properly.
- Launch an Elastic Beanstalk environment in the AWS Compute domain.
- Select the appropriate platform (e.g., PHP).
- Enable load balancing and auto scaling.
- Give the environment a name and check its availability.
- Create the environment inside a VPC.
- Select the instance type and key pair.
- Configure the VPC to match the one used for the RDS instance.
- Check if the security group is the same as the RDS instance.
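The same environment can also be created with the Elastic Beanstalk CLI instead of the console. A hedged sketch, assuming the EB CLI is installed and run from the application's code directory; the application and environment names are placeholders.

    # Register the application for the PHP platform, then create an environment
    # (load-balanced with auto scaling by default).
    eb init -p php photo-app
    eb create photo-env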
Steps to migrate the database to AWS infrastructure
- Select all the instances and click on Next.
- Review the configuration and launch the application.
- Check if the file has been uploaded to the S3 bucket and the database.
- Confirm that the uploaded file in the S3 bucket is the same as the one uploaded.
- Take a backup of the local database using the mysqldump command.
- Connect to the RDS instance and migrate the backup file to the RDS instance.
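A filled-in version of the backup step might look like the following; the local database name and output file are placeholders.

    # Dump the local MySQL database to a file that will later be imported into RDS.
    mysqldump -u root -p imagesdb > imagesdb-backup.sql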
Migrating a MySQL database to RDS and updating code
- Create a database in RDS using the command "create database <database_name>".
- Check if the RDS database is empty using the command "show tables".
- Import the MySQL database file into RDS using the command "mysql -h <RDS_endpoint> -u <username> -p <database_name> < <file_name>".
- Enter the password when prompted and wait for the file to be uploaded.
- Connect to RDS and check if the tables and data have been imported successfully using the commands "show tables" and "select * from <table_name>".
- Update the code files to connect to the RDS database instead of the local MySQL database.
- Change the host name, username, and password in the code to match the RDS details.
- Save the code files and test if the connection to RDS is working.
- Upload files to the RDS database using the updated code.
Migrating and Deploying Code to AWS Infrastructure
- Uploading image to S3 and confirming successful upload.
- Checking if image is visible on homepage and in S3.
- Checking if records are added to RDS and if code can interact with RDS.
- Verifying that the local (localhost) MySQL database is no longer being updated.
- Successfully migrating database to AWS infrastructure.
- Interacting with AWS infrastructure using code.
- Uploading code to Elastic Beanstalk by zipping the code files.
- Monitoring Elastic Beanstalk deployment progress.
- Accessing website using the provided URL.
- Uploading a file to confirm successful migration to Elastic Beanstalk.
- Verifying that database and S3 bucket have been updated.
- Obtaining a free domain name from "my.dot.tk".
- Configuring Route 53 with the custom name servers provided by "my.dot.tk".
Configuring Route 53 and Connecting to Instances
- Create a hosted zone and enter the domain name.
- Copy the nameservers provided by Route 53 and add them to the domain's nameservers section (a CLI sketch for creating the hosted zone and listing its nameservers follows this list).
- Configure Route 53 to connect to instances by creating a record set.
- Create a record set for the domain without www and select the Elastic Beanstalk Environment as the Alias.
- Create another record set with www in it and select the same Alias.
- Check if the URLs are working and upload a file to test the functionality.
- Recap of the steps taken: launching Elastic Beanstalk Environment, configuring Route 53, setting up RDS, and accessing S3 via IAM.
- Positive feedback from Sebastian.
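For reference, the hosted-zone step above can also be done from the CLI. A minimal sketch; the domain name, caller reference, and zone ID are placeholders.

    # Create the hosted zone, then describe it to see the nameservers to copy to the registrar.
    aws route53 create-hosted-zone --name example.tk --caller-reference demo-2017-08-01
    aws route53 get-hosted-zone --id /hostedzone/Z0123456789ABCDEFGHIJ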
Summary of the Discussion and Assignments
- The session covered the topics of cloud computing and AWS.
- Different domains in AWS were discussed, as well as the various AWS services.
- The session also included information on AWS pricing and a practical use-case of migrating an application to AWS.
- Participants were encouraged to ask for further clarification if needed.
- Everyone expressed clarity and thanked the instructor.