Application Load Balancers now support a slow start mode that allows you to add new targets without overwhelming them with a flood of requests. With the slow start mode, targets warm up before accepting their fair share of requests based on a ramp-up period that you specify.
The duration is configurable, from 30 seconds to 15 minutes:
Slow start mode can be enabled by target group and can be configured for a duration of 30 seconds to 15 minutes. The load balancer linearly increases the number of requests sent to a new target in a target group up to its fair share during the slow start ramp-up window.
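If you manage target groups through the API, turning this on amounts to setting a single target group attribute. A minimal sketch with boto3 is below; the target group ARN and the 120-second ramp-up window are made-up values:

```python
# Minimal sketch: enable slow start on an existing ALB target group via boto3.
# The target group ARN and the 120-second ramp-up window are placeholder values.
import boto3

elbv2 = boto3.client("elbv2")

elbv2.modify_target_group_attributes(
    TargetGroupArn=(
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
        "targetgroup/my-targets/0123456789abcdef"
    ),
    Attributes=[
        # Duration in seconds; valid values are 30-900, and 0 disables slow start.
        {"Key": "slow_start.duration_seconds", "Value": "120"},
    ],
)
```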
Previously, to ensure that services were able to discover and connect with each other, you had to configure and run your own service discovery system or connect every service to a load balancer. Now, you can enable service discovery for your containerized services with a simple selection in the ECS console, AWS CLI, or using the ECS API.
Recently, we proposed a reference architecture for ELB-based service discovery that uses Amazon CloudWatch Events and AWS Lambda to register the service in Amazon Route 53 and uses Elastic Load Balancing functionality to perform health checks and manage request routing. An ELB-based service discovery solution works well for most services, but some services do not need a load balancer.
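In API terms, that "simple selection" boils down to attaching a service registry to the ECS service. Here is a minimal boto3 sketch under that assumption; the cluster, task definition, subnet, and registry ARN are placeholder values, and the registry is a Route 53 auto naming service like the one created in a later example:

```python
# Minimal sketch: create an ECS service with service discovery enabled by
# attaching a service registry. Cluster, task definition, subnet and the
# registry ARN are placeholders; the registry is a Route 53 auto naming
# (servicediscovery) service.
import boto3

ecs = boto3.client("ecs")

ecs.create_service(
    cluster="my-cluster",
    serviceName="backend",
    taskDefinition="backend:1",
    desiredCount=2,
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "assignPublicIp": "DISABLED",
        }
    },
    serviceRegistries=[
        {
            "registryArn": (
                "arn:aws:servicediscovery:us-east-1:123456789012:"
                "service/srv-0123456789abcdef"
            )
        }
    ],
)
```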
The most direct impact is on the ELB side: fronting services with an ELB greatly eases the load-balancing and record-count issues (previously traffic was spread with round-robin DNS, and each response was limited to at most five records):
Beginning today, you can use the Amazon Route 53 Auto Naming APIs to create CNAME records when you register instances of your microservices, and your microservices can discover the CNAMEs by querying DNS for the service name. Additionally, you can use the Amazon Route 53 Auto Naming APIs to create Route 53 alias records that route traffic to Amazon Elastic Load Balancers (ELBs).
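To make the auto naming flow concrete, here is a minimal boto3 sketch; the namespace name, VPC ID, namespace ID, and CNAME target are all placeholder values:

```python
# Minimal sketch of the Route 53 auto naming flow: create a namespace and a
# CNAME-based service, then register an instance. The namespace name, VPC ID,
# namespace ID and CNAME target are placeholder values.
import boto3

sd = boto3.client("servicediscovery")

# Creates a private hosted zone for "example.com" in the given VPC
# (this is an asynchronous operation).
sd.create_private_dns_namespace(Name="example.com", Vpc="vpc-0123456789abcdef0")

# Once the namespace is ready, create a service that manages CNAME records
# (CNAME records require the WEIGHTED routing policy).
service = sd.create_service(
    Name="backend",
    DnsConfig={
        "NamespaceId": "ns-0123456789abcdef",  # from the namespace operation result
        "RoutingPolicy": "WEIGHTED",
        "DnsRecords": [{"Type": "CNAME", "TTL": 60}],
    },
)

# Registering an instance makes Route 53 create the record, so clients can
# discover it by querying DNS for backend.example.com.
sd.register_instance(
    ServiceId=service["Service"]["Id"],
    InstanceId="backend-1",
    Attributes={"AWS_INSTANCE_CNAME": "backend-1.internal.example.net"},
)
```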
We wanted to provide a bit more context for the most recent login issues and service instability. All of our cloud services are affected by updates required to mitigate the Meltdown vulnerability. We heavily rely on cloud services to run our back-end and we may experience further service issues due to ongoing updates.
When Amazon Route 53 receives a DNS query for the name of an instance, such as backend.example.com, it responds with up to eight IP addresses (for A or AAAA records) or up to eight SRV record values.
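On the client side, discovery is then just an ordinary DNS lookup. A minimal sketch using Python's standard library, assuming a service registered as backend.example.com listening on port 8080:

```python
# Minimal sketch of client-side discovery: resolve the service name and pick
# one of the returned addresses. backend.example.com and port 8080 are assumed
# values; Route 53 answers with up to eight records per query.
import random
import socket

addrs = socket.getaddrinfo("backend.example.com", 8080, type=socket.SOCK_STREAM)
host, port = random.choice(addrs)[4][:2]
print(f"connecting to {host}:{port}")
```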
Today we are launching a preview (sign up now) of Amazon Aurora Serverless. Designed for workloads that are highly variable and subject to rapid change, this new configuration allows you to pay for the database resources you use, on a second-by-second basis.
Then there is the extremely fast auto-scaling:
The endpoint is a simple proxy that routes your queries to a rapidly scaled fleet of database resources. This allows your connections to remain intact even as scaling operations take place behind the scenes. Scaling is rapid, with new resources coming online within 5 seconds.
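The preview is sign-up only, but configuration should roughly come down to picking a serverless engine mode and a capacity range. A hedged boto3 sketch of what that could look like, with the cluster identifier, credentials, and 2-16 capacity-unit range as invented values:

```python
# Hedged sketch: create an Aurora Serverless cluster that scales between 2 and
# 16 capacity units and pauses after 5 idle minutes. The identifier,
# credentials and scaling range are invented values.
import boto3

rds = boto3.client("rds")

rds.create_db_cluster(
    DBClusterIdentifier="my-serverless-cluster",
    Engine="aurora",              # MySQL-compatible Aurora
    EngineMode="serverless",
    MasterUsername="admin",
    MasterUserPassword="change-me-please",
    ScalingConfiguration={
        "MinCapacity": 2,
        "MaxCapacity": 16,
        "AutoPause": True,
        "SecondsUntilAutoPause": 300,
    },
)
```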
Amazon Aurora Multi-Master allows you to create multiple read/write master instances across multiple Availability Zones. This enables applications to read and write data to multiple database instances in a cluster, just as you can read across Read Replicas today.