Microsoft SQL Server can now run on t2.xlarge and t2.2xlarge...

AWS announced that Microsoft SQL Server can now run on t2-family instances: "Amazon EC2 T2 instance types are now supported on Windows with SQL Server Enterprise".

That said, presumably because of memory requirements, it is currently only available on t2.xlarge (16GB RAM) and t2.2xlarge (32GB RAM):

Windows with SQL Server Enterprise Edition is now available on t2.xlarge and t2.2xlarge instance types.

The obvious use case is test environments; another is internal systems that can't be shut down, which could accumulate CPU credits during off-peak hours, that kind of application?
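
To get a feel for how much credit headroom such an always-on instance has, a minimal boto3 sketch like the one below (the instance ID is a placeholder) can pull the CPUCreditBalance metric from CloudWatch:

```python
import boto3
from datetime import datetime, timedelta, timezone

# Minimal sketch: query CloudWatch for the CPUCreditBalance of a t2 instance
# over the last 24 hours. "i-0123456789abcdef0" is a placeholder instance ID.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUCreditBalance",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=datetime.now(timezone.utc) - timedelta(hours=24),
    EndTime=datetime.now(timezone.utc),
    Period=3600,              # one data point per hour
    Statistics=["Average"],
)

for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 1))
```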

AWS launches the AWS Instance Scheduler, a service that starts and stops your instances on a schedule...

When I first saw the description of the "AWS Instance Scheduler" in "Introducing the AWS Instance Scheduler", I mixed it up with the previously launched Scheduled Reserved Instances...

The AWS Instance Scheduler is a solution that enables customers to easily configure custom start and stop schedules for their Amazon EC2 and Amazon RDS instances. The solution is easy to deploy and can help reduce operational costs for both development and production environments. Customers who use this solution to run instances during regular business hours can save up to 70% compared to running those instances 24 hours a day.

The diagram in the announcement makes it clearer: the AWS Instance Scheduler is simply a service that starts and stops your instances at the times you specify.

The Scheduled Reserved Instances I mixed it up with are about buying RIs for a specific time slot, which is a billing matter. That said, the two look like a good fit to use together...
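
As a rough sketch of the underlying idea (not the AWS solution itself), the following boto3 snippet starts and stops instances that carry a hypothetical Schedule=office-hours tag; the two functions would be hooked up to a cron job or CloudWatch Events rule:

```python
import boto3

# Rough sketch of the idea behind scheduled start/stop: find instances tagged
# with a schedule name and start or stop them in bulk.
# The tag key/value ("Schedule" / "office-hours") are illustrative only.
ec2 = boto3.client("ec2", region_name="us-east-1")

def instances_with_schedule(schedule_name):
    paginator = ec2.get_paginator("describe_instances")
    filters = [{"Name": "tag:Schedule", "Values": [schedule_name]}]
    for page in paginator.paginate(Filters=filters):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                yield instance["InstanceId"]

def start_schedule(schedule_name):
    ids = list(instances_with_schedule(schedule_name))
    if ids:
        ec2.start_instances(InstanceIds=ids)

def stop_schedule(schedule_name):
    ids = list(instances_with_schedule(schedule_name))
    if ids:
        ec2.stop_instances(InstanceIds=ids)

# e.g. call start_schedule("office-hours") at 08:00
# and stop_schedule("office-hours") at 19:00
```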

Increased available bandwidth on Amazon EC2

AWS's Jeff Barr announced that ENA-enabled EC2 instances now get up to 25Gbps of bandwidth: "The Floodgates Are Open – Increased Network Bandwidth for EC2 Instances".

There are three parts. The first is increased bandwidth to S3:

EC2 to S3 – Traffic to and from Amazon Simple Storage Service (S3) can now take advantage of up to 25 Gbps of bandwidth. Previously, traffic of this type had access to 5 Gbps of bandwidth. This will be of benefit to applications that access large amounts of data in S3 or that make use of S3 for backup and restore.

The second is EC2 to EC2 (over the internal network):

EC2 to EC2 – Traffic to and from EC2 instances in the same or different Availability Zones within a region can now take advantage of up to 5 Gbps of bandwidth for single-flow traffic, or 25 Gbps of bandwidth for multi-flow traffic (a flow represents a single, point-to-point network connection) by using private IPv4 or IPv6 addresses, as described here.

The third is also EC2 to EC2, but within the same Cluster Placement Group:

EC2 to EC2 (Cluster Placement Group) – Traffic to and from EC2 instances within a cluster placement group can continue to take advantage of up to 10 Gbps of lower-latency bandwidth for single-flow traffic, or 25 Gbps of lower-latency bandwidth for multi-flow traffic.

These are the ENA-enabled AMIs; CentOS doesn't seem to be on the list:

ENA-enabled AMIs are available for Amazon Linux, Ubuntu 14.04 & 16.04, RHEL 7.4, SLES 12, and Windows Server (2008 R2, 2012, 2012 R2, and 2016). The FreeBSD AMI in AWS Marketplace is also ENA-enabled, as is VMware Cloud on AWS.
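
If you want to check whether an existing instance or AMI has ENA enabled, a minimal boto3 sketch (the IDs below are placeholders) looks like this:

```python
import boto3

# Minimal sketch: check ENA support on an existing instance and on an AMI.
# The instance ID and AMI ID below are placeholders.
ec2 = boto3.client("ec2", region_name="us-east-1")

# For a running instance, the "enaSupport" attribute says whether ENA is enabled.
attr = ec2.describe_instance_attribute(
    InstanceId="i-0123456789abcdef0",
    Attribute="enaSupport",
)
print("instance ENA enabled:", attr.get("EnaSupport", {}).get("Value", False))

# For an AMI, describe_images returns an EnaSupport flag.
images = ec2.describe_images(ImageIds=["ami-0123456789abcdef0"])
for image in images["Images"]:
    print(image["ImageId"], "ENA:", image.get("EnaSupport", False))
```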

The parameters Netflix tunes on EC2

Brendan Gregg has written up the parameters Netflix tunes on EC2: "AWS re:Invent 2017: How Netflix Tunes EC2".

These parameters were covered at AWS re:Invent 2017, and he has now organized them for easier reference:

My last talk for 2017 was at AWS re:Invent, on "How Netflix Tunes EC2 Instances for Performance," an updated version of my 2014 talk.

He notes that the tuning targets Ubuntu 16.04 (and the 2017 release at that, so presumably 16.04.3?); make sure you understand each parameter before applying it:

WARNING: These tunables were developed in late 2017, for Ubuntu Xenial instances on EC2.
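
Before applying anything from the talk, it can be worth recording what your instance currently runs with; the sketch below (the parameter list is just a sample of commonly tuned networking and memory knobs, not the tunables from the talk) reads the current values from /proc/sys:

```python
from pathlib import Path

# Minimal sketch: dump the current values of a few commonly tuned kernel
# parameters so they can be compared against any recommended settings.
# This sample list is illustrative, not the tunables from the talk.
SAMPLE_TUNABLES = [
    "net.core.somaxconn",
    "net.core.netdev_max_backlog",
    "net.ipv4.tcp_rmem",
    "net.ipv4.tcp_wmem",
    "vm.swappiness",
]

def read_sysctl(name):
    path = Path("/proc/sys") / name.replace(".", "/")
    return path.read_text().strip()

for name in SAMPLE_TUNABLES:
    try:
        print(f"{name} = {read_sysctl(name)}")
    except FileNotFoundError:
        print(f"{name} is not available on this kernel")
```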

Percona analyzes the performance of Percona XtraDB Cluster on AWS (I/O bound)

The Percona folks analyzed the performance of running Percona XtraDB Cluster (PXC) on Amazon EC2 under an I/O-bound workload: "Best Practices for Percona XtraDB Cluster on AWS".

First, take a look at the chart they produced (in the post).

Skipping straight to the conclusions: if losing data is acceptable, i3 local storage gives the best performance; if data must not be lost, EBS Provisioned IOPS SSD (io1) performs much better than General Purpose SSD (gp2).
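
For reference, provisioning an io1 volume with explicit IOPS versus a plain gp2 volume is only a small difference in the API call; a minimal boto3 sketch (size, IOPS, and availability zone are placeholders) looks like this:

```python
import boto3

# Minimal sketch: create an io1 volume with provisioned IOPS next to a gp2
# volume for comparison. Size, IOPS, and AZ values are placeholders, and
# both volumes incur charges until deleted.
ec2 = boto3.client("ec2", region_name="us-east-1")

io1_volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=500,            # GiB
    VolumeType="io1",
    Iops=10000,          # provisioned IOPS, paid for explicitly
)

gp2_volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=500,            # GiB
    VolumeType="gp2",    # baseline IOPS scale with volume size
)

print(io1_volume["VolumeId"], gp2_volume["VolumeId"])
```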

On instance type selection, avoid {i3,r4}.large, because their tests show {i3,r4}.xlarge performs more than twice as well.

That said, Aurora Multi-Master is already in preview; if the Percona folks get access to it, there should be a performance-per-unit-cost comparison to look at...

The performance of Amazon EC2's various virtualization technologies

Brendan Gregg has put together a comparison of Amazon EC2's various virtualization technologies and their performance: "AWS EC2 Virtualization 2017". The chart he made shows that the two newest techniques (numbers 7 and 8) perform quite well.

They break down into three main kinds of virtualization:

  • Virtualized in Software: While this can support an unmodified guest OS, many operations are emulated and slow. Apps may run 2x to 10x slower, or worse.
  • Paravirtualization: The hypervisor provides efficient hypercalls, and the guest OS uses drivers and kernel modifications to call these hypercalls. It's using software and coordination between the hypervisor and guest to improve performance. I'd expect measurable overhead of 10% to 50% (depending on the PV type and workload).
  • Virtualized in Hardware: Hardware support for virtualization, and near bare-metal speeds. I'd expect between 0.1% and 1.5% overhead.

You can see the progression toward hardware virtualization... and in the end they even launched bare metal machines XD
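
To see which category your own fleet falls into, a minimal boto3 sketch can list the virtualization type and hypervisor that the EC2 API reports for each instance:

```python
import boto3

# Minimal sketch: list each instance's virtualization type (hvm/paravirtual)
# and the hypervisor field reported by the EC2 API.
ec2 = boto3.client("ec2", region_name="us-east-1")

paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            print(
                instance["InstanceId"],
                instance["InstanceType"],
                instance["VirtualizationType"],     # "hvm" or "paravirtual"
                instance.get("Hypervisor", "n/a"),  # typically "xen" (also used for Nitro)
            )
```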

Amazon EC2 adds Spread Placement Groups to force instances onto separate hardware

Amazon EC2 has introduced a new option, Spread Placement Groups, for spreading instances apart: "Introducing Spread Placement Groups for Amazon EC2".

The existing Cluster Placement Group packs instances close together, which helps applications that are extremely latency-sensitive (such as the HPC applications mentioned below); the new Spread Placement Group instead requires instances to run on distinct hardware, spreading them out to reduce risk:

Spread placement groups help reduce the likelihood of failures within clusters or groups of instances. Amazon EC2 has had cluster placement groups, which enable applications to get the low-latency network performance necessary for tightly-coupled node-to-node communication typical of many HPC applications. Now with spread placement groups, member instances will be placed on distinct hardware, reducing the impact of hardware failures on your applications.

And it's available in all regions:

Spread placement groups are available in all AWS regions. To get started, visit the AWS Management Console, AWS Command Line Interface (CLI), and AWS SDKs.
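
Creating a spread placement group and launching instances into it is only a small change to the usual calls; a minimal boto3 sketch (the AMI ID and group name are placeholders) looks like this:

```python
import boto3

# Minimal sketch: create a spread placement group and launch instances into it.
# The AMI ID and group name are placeholders.
ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_placement_group(GroupName="web-spread", Strategy="spread")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="m5.large",
    MinCount=3,
    MaxCount=3,
    Placement={"GroupName": "web-spread"},  # each instance lands on distinct hardware
)
```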

AWS offers a Deep Learning AMI for Windows

Some Windows-based workloads can now be spun up and run directly: "Announcing New AWS Deep Learning AMI for Microsoft Windows".

Windows Server 2012 R2 and 2016 are currently supported:

Amazon Web Services now offers an AWS Deep Learning AMI for Microsoft Windows Server 2012 R2 and 2016.

And the drivers plus the commonly used tooling are all bundled in:

The AMIs also include popular deep learning frameworks such as Apache MXNet, Caffe and Tensorflow, as well as packages that enable easy integration with AWS, including launch configuration tools and many popular AWS libraries and tools. The AMIs come prepackaged with Nvidia CUDA 9, cuDNN 7, and Nvidia 385.54 drivers, and contain the Anaconda platform (supports Python versions 2.7 and 3.5).

Amazon EC2 launches T2 Unlimited, letting you pay to use CPU beyond your credits

Amazon EC2 has introduced T2 Unlimited on the t2 family of instances: "T2 Unlimited – Going Beyond the Burst with High Performance".

This isn't a new instance type; rather, existing instances can now use more CPU than their credits cover, and AWS bills the extra usage separately.

It can be enabled on both newly launched and already-running instances.
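
For example, a minimal boto3 sketch (the instance and AMI IDs are placeholders) flips an existing instance to unlimited credits and shows the equivalent option at launch time:

```python
import boto3

# Minimal sketch: switch an existing t2 instance to "unlimited" CPU credits,
# and show the equivalent option at launch time. IDs are placeholders.
ec2 = boto3.client("ec2", region_name="us-east-1")

# Existing instance:
ec2.modify_instance_credit_specification(
    InstanceCreditSpecifications=[
        {"InstanceId": "i-0123456789abcdef0", "CpuCredits": "unlimited"},
    ]
)

# New instance:
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    CreditSpecification={"CpuCredits": "unlimited"},
)
```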

Taking us-east-1 as the example, it's actually quite cheap, with no obvious penalty fee: a t2.micro's CPU credit baseline is 10% and its hourly price is $0.0116, so as a rough reference the cost of 100% CPU works out to $0.116 (if you simply scale everything by ten).

T2 Unlimited in us-east-1 is $0.05 per vCPU-hour, which actually doesn't look bad? The risk is probably that you're not guaranteed to get the extra CPU resources...

Might need to rerun the numbers on how we use c4 and c5...

Also, although the post ends with a long list, after checking it against the region table it looks like all regions are supported (except the AWS GovCloud (US) region):

You can launch T2 Unlimited instances today in the US East (Northern Virginia), US East (Ohio), US West (Northern California), US West (Oregon), Canada (Central), South America (São Paulo), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Mumbai), Asia Pacific (Seoul), EU (Frankfurt), EU (Ireland), and EU (London) Regions today.

Amazon EC2 launches two more new instance families: M5 and H1

Amazon EC2's new instance announcements: "M5 – The Next Generation of General-Purpose EC2 Instances" and "H1 Instances – Fast, Dense Storage for Big Data Applications".

M5 is the successor to M4 (General Purpose), so there isn't much special to say about it... H1 is close to D2 and should count as its successor (Dense Storage); looking at the details it feels like just an upgrade (even though it got a new family type), so not much to say there either...

And both are available in only a few regions...

M5 in three regions:

You can launch M5 instances today in the US East (Northern Virginia), US West (Oregon), and EU (Ireland) Regions in On-Demand and Spot form (Reserved Instances are also available), with additional Regions in the works.

H1 in four regions:

H1 instances are available today in the US East (Northern Virginia), US West (Oregon), US East (Ohio), and EU (Ireland) Regions.

The usual round of upgrades you get at a product launch event XD
