Amazon Web Services launched in 2006 with a total of three products: storage buckets, compute instances, and a messaging queue. Today, it offers a mind-numbing 200-plus services, and what's most confusing is that many of them appear to do almost exactly the same thing.
It's kind of like shopping at a big grocery store where you have different aisles of product categories filled with things to buy that meet the needs of virtually every developer on the planet. In today's video, we'll walk down these aisles to gain an understanding of over 50 different AWS products. So first, let's start with a few that are above my pay grade that you may not know exist.
If you're building robots, you can use RoboMaker to simulate and test your robots at scale. Then once your robots are in people's homes, you can use IoT Core to collect data from them, update their software, and manage them remotely. If you happen to have a satellite orbiting Earth, you can tap into Amazon's global network of antennas to downlink data through its Ground Station service.
And if you want to start experimenting and researching the future of computing, you can use Braket to interact with a quantum computer. But most developers go to the cloud to solve more practical problems, and for that, let's head to the compute aisle. One of the original AWS products was Elastic Compute Cloud.
It's one of the most fundamental building blocks on the platform and allows you to create a virtual computer in the cloud. Choose your operating system, memory, and computing power, then rent that space in the cloud like you're renting an apartment that you pay for by the second. A common use case is to use an instance as a server for a web application.
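That per-second billing is just simple arithmetic. Here's a back-of-the-envelope sketch — the hourly rate below is a made-up example, not a real AWS price:

```python
# Per-second EC2-style billing, in miniature.
# HOURLY_RATE is a hypothetical on-demand price, not a real AWS one.
HOURLY_RATE = 0.10  # $/hour, made up for illustration

def instance_cost(seconds: int, hourly_rate: float = HOURLY_RATE) -> float:
    """Cost of running one instance for `seconds`, billed per second."""
    return round(seconds * hourly_rate / 3600, 4)

# Running for 90 minutes costs exactly 1.5x the hourly rate:
print(instance_cost(90 * 60))  # 0.15
```

The point of by-the-second billing is that short-lived workloads only pay for what they actually use, instead of a full hour.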
But one problem is that as your app grows, you'll likely need to distribute traffic across multiple instances. In 2009, Amazon introduced Elastic Load Balancing, which allowed developers to distribute traffic to multiple instances automatically. In addition, the CloudWatch service can collect logs and metrics from each individual instance.
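The load-balancing half of that story can be sketched in a few lines. This is a toy round-robin balancer, not ELB's actual algorithm — real load balancers also do health checks, connection draining, and smarter routing:

```python
from itertools import cycle

# Toy round-robin load balancer. Real ELBs are far more sophisticated,
# but the core idea is the same: spread requests across instances.
class RoundRobinBalancer:
    def __init__(self, instances):
        self._pool = cycle(instances)

    def route(self, request: str) -> str:
        instance = next(self._pool)
        return f"{instance} handles {request}"

lb = RoundRobinBalancer(["i-aaa", "i-bbb"])
print(lb.route("GET /"))  # i-aaa handles GET /
print(lb.route("GET /"))  # i-bbb handles GET /
```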
The data collected from CloudWatch can then be passed off to Auto Scaling, in which you define policies that create new instances as needed, based on the traffic and utilization of your current infrastructure. These tools were revolutionary at the time, but developers still wanted an easier way to get things done. And that's where Elastic Beanstalk comes in. Most developers in 2011 just wanted to deploy a Ruby on Rails app.
Elastic Beanstalk made that much easier by providing an additional layer of abstraction on top of EC2 and other autoscaling features. Choose a template, deploy your code, and let all the auto-scaling stuff happen automatically. This is often called a platform-as-a-service, but in some cases, it's still too complicated.
If you don't care about the underlying infrastructure whatsoever and just want to deploy a WordPress site, Lightsail is an alternative option where you can point and click at what you want to deploy and worry even less about the underlying configuration. In all these cases, you are deploying a static server that is always running in the cloud. But many computing jobs are ephemeral, which means they don't rely on any persistent state on the server, so why bother deploying a server for code like that? In 2014, Lambda came out, which offers functions as a service, or serverless computing.
With Lambda, you simply upload your code, then choose an event that decides when that code should run. Traffic scaling and networking all happen entirely in the background, and unlike a dedicated server, you only pay for the exact number of requests and computing time that you use. Now, if you don't like writing your own code, you can use the Serverless Application Repository to find pre-built functions that you can deploy with the click of a button.
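A Lambda function itself is just a handler: AWS calls it with an event dict and a context object, and you return a response. You can invoke the same handler locally with a fake event to test it, which is exactly what this sketch does:

```python
import json

# A Lambda-style handler. In production, AWS invokes this with an
# event dict and a context object; here we call it locally instead.
def handler(event, context=None):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello {name}"}),
    }

response = handler({"name": "dev"})
print(response["statusCode"])        # 200
print(json.loads(response["body"]))  # {'message': 'hello dev'}
```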
But what if you're a huge enterprise with a bunch of your own servers? Outposts is a way to run AWS APIs on your own infrastructure without needing to throw your old servers in the garbage. In other cases, you may want to interact with AWS from remote or extreme environments, like if you're a scientist in the Arctic. Snow devices are like little mini data centers that can work without internet in hostile environments.
So that gives us some fundamental ways to compute things, but many apps today are standardized with Docker containers, allowing them to run on multiple different clouds or computing environments with very little effort. To run a container, you first need to create a Docker image and store it somewhere. Elastic Container Registry allows you to upload an image, allowing other tools like Elastic Container Service to pull it back down and run it. ECS is an API for starting, stopping, and allocating virtual machines to your containers, and allows you to connect them to other products like load balancers.
Some companies may want more control over how their app scales, in which case EKS is a tool for running Kubernetes. But in other cases, you may want your containers to behave in a more automated way. Fargate is a tool that will make your containers behave like serverless functions, removing the need to allocate EC2 instances for your containers. But if you're building an application and already have it containerized, the easiest way to deploy it to AWS is App Runner. This is a product introduced in 2021 where you simply point it to a container image, and it handles all the orchestration and scaling behind the scenes.
But running an application is only half the battle. We also need to store data in the cloud. Simple Storage Service, or S3, was the very first product offered by AWS. It can store any type of file or object like an image or video and is based on the same infrastructure as Amazon's e-commerce site.
It's great for general purpose file storage, but if you don't access your files very often, you can archive them in Glacier, which has a higher latency but a much lower cost. On the other end of the spectrum, you may need storage that is extremely fast and can handle a lot of throughput. Elastic Block Store is ideal for applications that have intensive data processing requirements, but it requires more manual configuration by the developer. Now, if you want something that's highly performant and also fully managed, Elastic File System provides all the bells and whistles, but at a much higher cost. In addition to raw files, developers also need to store structured data for their end users.
And that brings us to the database aisle, which has a lot of different products to choose from. The first ever database on AWS was SimpleDB, a general purpose NoSQL database, but it tends to be a little too simple for most people's needs.
It was followed up a few years later with DynamoDB, which is a document database that's very easy to scale horizontally. It's inexpensive and provides fast read performance, but it isn't very good at modeling relational data. If you're familiar with MongoDB, another document database option is DocumentDB.
It's a controversial option: it's technically not MongoDB, but rather a one-to-one mapping of the MongoDB API, built to get around restrictive open-source licensing. Speaking of which, Amazon also did a similar thing with Elasticsearch, which itself is a great option if you want to build something like a full-text search engine. But the majority of developers out there will opt for a traditional relational SQL database.
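Before going relational, the core idea behind a DynamoDB-style store is worth a quick sketch: items are addressed by a partition key plus a sort key, and a query fetches everything under one partition key in sorted order. A plain dict stands in for the table here — illustrative only, since the real service distributes data across servers by hashing the partition key:

```python
# Toy stand-in for a DynamoDB table: items keyed by (partition, sort).
table = {}

def put_item(pk: str, sk: str, attrs: dict):
    table[(pk, sk)] = attrs

def query(pk: str):
    """All items sharing a partition key, ordered by sort key."""
    return [attrs for (p, _s), attrs in sorted(table.items()) if p == pk]

put_item("USER#42", "ORDER#2021-01-01", {"total": 30})
put_item("USER#42", "ORDER#2021-02-01", {"total": 55})
print(query("USER#42"))  # both orders for user 42, in date order
```

Notice there's no JOIN anywhere — which is exactly why this model scales horizontally so well, and also why it struggles with relational data.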
Amazon Relational Database Service, RDS, supports a variety of different SQL flavors and can fully manage things like backups, patching, and scaling. But Amazon also offers its own proprietary flavor of SQL called Aurora. It's compatible with Postgres or MySQL and can be operated with better performance at a lower cost. In addition, Aurora offers a new serverless option that makes it even easier to scale, and you only pay for the actual time that the database is in use.
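To see what "relational" buys you, here's a tiny JOIN-plus-aggregate query. Python's built-in sqlite3 stands in for a managed engine like RDS or Aurora — the SQL itself works the same way in Postgres or MySQL:

```python
import sqlite3

# sqlite3 as a local stand-in for RDS/Aurora: same relational model.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    INSERT INTO users  VALUES (1, 'ada');
    INSERT INTO orders VALUES (10, 1, 25.0), (11, 1, 15.0);
""")

# JOIN related tables and aggregate -- the thing document stores can't do.
row = db.execute("""
    SELECT u.name, SUM(o.total)
    FROM users u JOIN orders o ON o.user_id = u.id
    GROUP BY u.id
""").fetchone()
print(row)  # ('ada', 40.0)
```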
Relational databases are a great general purpose option, but they're not the only option. Neptune is a graph database that can achieve better performance on highly connected datasets, like a social graph or recommendation engine. If your current database is too slow, you may want to bring in ElastiCache, which is a fully managed version of Redis, an in-memory database
that delivers data to your end users with extremely low latency. If you work with time series data, like the stock market for example, you might benefit from Timestream, a time series database with built-in functions for time-based queries and additional features for analytics. Yet another option is the Quantum Ledger database, which allows you to build an immutable set of cryptographically signed transactions very similar to decentralized blockchain technology. Now let's shift gears and talk about analytics.
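Before moving on, the pattern a cache like ElastiCache enables — cache-aside — deserves a quick sketch. Check the cache first, fall back to the database on a miss, then populate the cache. A plain dict stands in for Redis here:

```python
import time

# Cache-aside pattern, with a dict standing in for Redis/ElastiCache.
cache = {}

def slow_db_lookup(key: str) -> str:
    time.sleep(0.01)  # pretend this is a slow SQL query
    return f"value-for-{key}"

def get(key: str) -> str:
    if key in cache:
        return cache[key]           # cache hit: fast, in-memory
    value = slow_db_lookup(key)     # cache miss: hit the database
    cache[key] = value              # populate for next time
    return value

print(get("user:42"))  # miss -> slow path
print(get("user:42"))  # hit  -> served from cache
```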
To analyze data, you first need a place to store it, and a popular option for doing that is Redshift, which is a data warehouse that tries to get you to shift away from Oracle. Warehouses are often used by big enterprises to dump multiple data sources from the business where they can be analyzed together. When all your data is in one place, it's easier to generate meaningful analytics and run machine learning on it.
Data in a warehouse is structured so it can be queried, but if you need a place to put a large amount of unstructured data, you can use AWS Lake Formation, which is a tool for creating data lakes, or repositories that store a large amount of unstructured data, and can be used in addition to data warehouses to query a larger variety of data sources. If you want to analyze real-time data, you can use Kinesis to capture real-time streams from your infrastructure, then visualize them in your favorite business intelligence tool.
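Under the hood, Kinesis routes each record to a shard by taking the MD5 hash of its partition key as a big integer. Here's a simplified sketch of that routing — real Kinesis matches the hash against per-shard hash-key ranges, while this toy version just takes the hash modulo the shard count:

```python
import hashlib

# Simplified Kinesis-style shard routing: hash the partition key.
def shard_for(partition_key: str, num_shards: int) -> int:
    digest = hashlib.md5(partition_key.encode()).hexdigest()
    return int(digest, 16) % num_shards

# Records with the same partition key always land on the same shard,
# which is what preserves per-key ordering in the stream.
print(shard_for("sensor-1", 4))
print(shard_for("sensor-1", 4))  # same shard, every time
```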
Or you can use a stream processing framework like Apache Spark that runs on Elastic MapReduce, which itself is a service that allows you to operate on massive datasets efficiently with a parallel distributed algorithm. Now, if you don't want to use Kinesis for streaming data, a popular alternative is Apache Kafka. It's open source, and Amazon MSK is a fully managed service to get you started. But for the average developer, all this data processing may be a little too complicated.
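MapReduce itself is a simple idea: map each chunk of data to partial results, then reduce the partials into a final answer. Here's the classic word count in miniature — EMR runs the same shape of computation, via Spark or Hadoop, across a whole cluster in parallel:

```python
from collections import Counter
from functools import reduce

docs = ["the cloud is big", "the cloud is elastic"]

# Map: each document -> per-document word counts
mapped = [Counter(doc.split()) for doc in docs]

# Reduce: merge the partial counts into a global total
totals = reduce(lambda a, b: a + b, mapped, Counter())
print(totals["the"])      # 2
print(totals["elastic"])  # 1
```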
Glue is a serverless product that makes it much easier to extract, transform, and load your data. It can automatically connect to other data sources on AWS like Aurora, Redshift, and S3, and has a tool called Glue Studio so you can create jobs without having to write any actual source code. But one of the biggest advantages of collecting massive amounts of data is that you can use it to help predict the future, and AWS has a bunch of tools in the machine learning aisle to make that process easier.
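The extract-transform-load shape that Glue automates looks like this in miniature — pull raw data in, clean it up and cast types, then load the result somewhere useful. The CSV data below is made up for illustration:

```python
import csv
import io

# A miniature ETL job. Glue does this same shape of work serverlessly
# against real sources like S3, Aurora, and Redshift.
raw = "name,amount\nada,10\ngrace,bad-row\nlin,30\n"

# Extract: parse the raw source
rows = list(csv.DictReader(io.StringIO(raw)))

# Transform: drop malformed rows, cast strings to numbers
clean = [
    {"name": r["name"], "amount": int(r["amount"])}
    for r in rows
    if r["amount"].isdigit()
]

# Load: here we just total the amounts instead of writing to a warehouse
print(sum(r["amount"] for r in clean))  # 40
```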
But first, if you don't have any high quality data of your own, you can use AWS Data Exchange to purchase and subscribe to data from third-party sources. Once you have some data in the cloud, you can use SageMaker to connect to it and start building machine learning models with TensorFlow or PyTorch. It operates on multiple levels to make machine learning easier, and provides a managed Jupyter notebook that can connect to a GPU instance to train a machine learning model, then deploy it somewhere useful.
That's cool, but building your own ML models from scratch is still extremely difficult. If you need to do image analysis, you may as well just use the Rekognition API. It can classify all kinds of objects in images and is likely way better than anything that you would build on your own. Or if you want to build a conversational bot, you might use Lex, which runs on the same technology that powers Alexa devices.
Or if you just want to have fun and learn how machine learning works, you might buy a DeepRacer device, which is an actual race car that you can drive with your own machine learning code. Now that's a pretty amazing way to get people to use your cloud platform. But let's change direction and look at a few other essential tools that are used by a wide variety of developers.
For security, we have IAM, where you can create roles and determine who has access to what on your AWS account. If you're building a web or mobile app where users can log into an account, Cognito is a tool that enables them to log in with a variety of different authentication methods and manages the user sessions for you. Then, once you have a few users logged into your app, you may want to send them push notifications.
SNS is a tool that can get that job done. Or maybe you want to send emails to your users. SES is the tool for that. Now that you know about all these tools, you're going to want an organized way to provision them.
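Back in the security aisle for a second: an IAM permission is just a JSON policy document describing who can do what, on which resources. Here's the shape of one — the bucket name is a made-up example:

```python
import json

# The anatomy of an IAM policy document. "2012-10-17" is the current
# policy language version; the bucket ARN below is hypothetical.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-bucket/*",
        }
    ],
}

doc = json.dumps(policy, indent=2)
print(doc)
```

Attach a document like this to a role, and anything assuming that role can read objects from that one bucket and nothing else.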
CloudFormation is a way to create templates based on your infrastructure in YAML or JSON, allowing you to enable hundreds of different services with the single click of a button. From there, you'll likely want to interact with those services from a front-end application like iOS, Android, or the web. Amplify provides SDKs that can connect to your infrastructure from JavaScript frameworks and other front-end applications.
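A CloudFormation template really is just YAML describing resources. Here's about the smallest useful one — a single S3 bucket, with a made-up bucket name:

```yaml
# Minimal CloudFormation template: one S3 bucket.
# Deploy with: aws cloudformation deploy --template-file template.yml --stack-name demo
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  DemoBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-example-bucket-name  # must be globally unique
```

Scale that same idea up to hundreds of resources, and you get reproducible infrastructure from a single file.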
Now, the final thing to remember is that all this is going to cost you a ton of money, which goes directly toward getting Jeff's rocket up. So make sure to use AWS Cost Explorer and Budgets if you don't want to pay for those big bulging rockets.
That's the end of the video. It took a ton of work, so please like and subscribe to support the channel, or become a pro member at Fireship.io to get access to more advanced content about building apps in the cloud. Thanks for watching, and I will see you in the next one.