EC2's C5 family adds 12x, 24x, and metal types

The EC2 c5 family has added three new instance types, 12xlarge, 24xlarge, and metal: 「Now Available: New C5 instance sizes and bare metal instances」.

The original lineup was c5.large, c5.xlarge, c5.2xlarge, c5.4xlarge, and c5.9xlarge (already a bit irregular there); now 12xlarge and 24xlarge join them. Presumably the machines got big enough that the numbers had to line up with the hardware specs?

The other point is that c5.24xlarge and c5.metal are actually the same hardware spec (and the same price); the main difference is that c5.metal skips the virtualization layer, so besides squeezing out performance, you can also see much more of the hardware. It's aimed at workloads that:

  • do not want to take the performance hit of nested virtualization,
  • need access to physical resources and low-level hardware features, such as performance counters and Intel VT that are not always available or fully supported in virtualized environments,
  • are intended to run directly on the hardware, or licensed and supported for use in non-virtualized environments.

Using the CLI when AWS enforces MFA

AWS lets you require MFA for IAM users (covering API operations too). In that case, to use the AWS Command Line Interface you have to go through AWS STS (AWS Security Token Service) to generate another set of key + secret + session token, and that set is valid for 12 hours by default.

For background, see 「How do I use an MFA token to authenticate access to my AWS resources through the AWS CLI?」; I then wrote a bit of shell script to handle it.

First, put the MFA ARN into ~/.aws/config, like this:

[profile mycompany]
mfa = arn:aws:iam::012345678901:mfa/gslin
region = us-east-1

Then aws.mfa mycompany spawns a shell with the key + secret + session token baked in:

function aws.mfa() {
    local MFA_ARN
    local MFA_TOKEN
    local PROFILE="$1"
    local STSDATA

    # Pull the MFA ARN for this profile out of ~/.aws/config.
    MFA_ARN="$(python3 -c "import configparser; import os; c=configparser.ConfigParser(); c.read('{}/.aws/config'.format(os.environ['HOME'])); print(c['profile ${PROFILE}']['mfa'])")"
    echo "Reading ${PROFILE} and going for token ${MFA_ARN} ..."

    echo -n 'MFA Password: '
    read -r MFA_TOKEN

    # Trade the MFA code for temporary credentials via STS.
    STSDATA="$(aws --profile "${PROFILE}" sts get-session-token --serial-number "${MFA_ARN}" --token-code "${MFA_TOKEN}")"
    export AWS_ACCESS_KEY_ID="$(echo "${STSDATA}" | jq -r .Credentials.AccessKeyId)"
    export AWS_SECRET_ACCESS_KEY="$(echo "${STSDATA}" | jq -r .Credentials.SecretAccessKey)"
    export AWS_SESSION_TOKEN="$(echo "${STSDATA}" | jq -r .Credentials.SessionToken)"

    # Spawn a subshell that inherits the exported credentials.
    echo 'Running an independent shell...'
    ${SHELL}
}

Obviously it depends on Python 3 and jq; the stock system packages for both should do.

From there everything works as usual, e.g. commands like aws --region=us-east-1 s3 ls.
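
A typical session then looks like this (the token value is a placeholder; aws sts get-caller-identity is just a quick way to confirm the temporary credentials took effect):

$ aws.mfa mycompany
Reading mycompany and going for token arn:aws:iam::012345678901:mfa/gslin ...
MFA Password: 123456
Running an independent shell...
$ aws sts get-caller-identity
$ aws --region=us-east-1 s3 ls

The credentials live in environment variables, so exiting the spawned shell drops them.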

Stripe hits the DNS resolver limit on AWS

Once your traffic is big enough, you run into all sorts of limits...

This time it's Stripe describing their troubleshooting: 「The secret life of DNS packets: investigating complex networks」.

One rather interesting part of the architecture is that every host runs Unbound, forwarding to central DNS servers, which then decide whether to route a query to Consul or to AWS's DNS resolver:

Unbound runs locally on every host as well as on the DNS servers.

They then noticed occasional floods of SERVFAIL responses.

What follows is the usual hunt for the problem (watching with tcpdump, counting packets with iptables); in the end it turned out they were stuck on AWS's DNS resolver, which replied with only 61,385 packets over a 60-second window, roughly 1,023 packets/sec, a number that immediately looks suspicious:

During one of the 60-second collection periods the DNS server sent 257,430 packets to the VPC resolver. The VPC resolver replied back with only 61,385 packets, which averages to 1,023 packets per second. We realized we may be hitting the AWS limit for how much traffic can be sent to a VPC resolver, which is 1,024 packets per second per interface. Our next step was to establish better visibility in our cluster to validate our hypothesis.

The official documentation 「Using DNS with Your VPC」 spells out the corresponding limit:

Each Amazon EC2 instance limits the number of packets that can be sent to the Amazon-provided DNS server to a maximum of 1024 packets per second per network interface. This limit cannot be increased. The number of DNS queries per second supported by the Amazon-provided DNS server varies by the type of query, the size of response, and the protocol in use. For more information and recommendations for a scalable DNS architecture, see the Hybrid Cloud DNS Solutions for Amazon VPC whitepaper.

With the problem found, the rest was finding a fix... The workaround they settled on can only be called one with no real side effects; then again, it's hard to think of a much better solution.

因為是查詢 10.0.0.0/8 網段反解產生大量的查詢,所以就在各 server 上的 Unbound 上指定這個網段直接問 AWS 的 DNS Resolver,不需要往中央的 DNS Resolver 問,這樣在這個場景就不會遇到 1024 packets/sec 問題了 XDDD
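
A minimal sketch of what that Unbound stanza could look like, assuming the VPC resolver sits at 10.0.0.2 (the VPC base address + 2; adjust for your CIDR):

server:
    # Unbound serves RFC1918 reverse zones as empty by default; release this one
    local-zone: "10.in-addr.arpa." nodefault

forward-zone:
    name: "10.in-addr.arpa."
    # hypothetical VPC resolver address for this example
    forward-addr: 10.0.0.2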

Amazon EBS's default encryption

EBS now has an option to encrypt new volumes by default: 「New – Opt-in to Default Encryption for New EBS Volumes」.

Not too surprisingly, the option has to be turned on region by region:

Per-Region – As noted above, you can opt-in to default encryption on a region-by-region basis.
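
If you're scripting that, the EC2 API has per-region calls for it; a quick sketch with the AWS CLI:

aws --region us-east-1 ec2 enable-ebs-encryption-by-default
aws --region us-east-1 ec2 get-ebs-encryption-by-default

Repeat per region (and per account) as needed.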

Also not too surprisingly, you can't share these fully publicly:

AMI Sharing – As I noted above, we recently gave you the ability to share encrypted AMIs with other AWS accounts. However, you cannot share them publicly, and you should use a separate account to create community AMIs, Marketplace AMIs, and public snapshots. To learn more, read How to Share Encrypted AMIs Across Accounts to Launch Encrypted EC2 Instances.

Some services carry their own EBS encryption settings and aren't affected this time; services that are really EC2 underneath (and create EBS volumes in your account) pick the setting up directly:

Other AWS Services – AWS services such as Amazon Relational Database Service (RDS) and Amazon WorkSpaces that use EBS for storage perform their own encryption and key management and are not affected by this launch. Services such as Amazon EMR that create volumes within your account will automatically respect the encryption setting, and will use encrypted volumes if the always-encrypt feature is enabled.

EC2 simplifies the requirements for Hibernation

Because RAM holds all sorts of sensitive data, EC2 Hibernation originally required the memory image to be encrypted when written out to the snapshot, and the design required launching from an encrypted AMI to get an encrypted snapshot:

Hibernation requires an EC2 instance be an encrypted EBS-backed instance. This ensures protection of sensitive contents in memory (RAM) as they get copied to the EBS upon hibernation.

That was a hassle for most people: the AMI itself usually holds nothing sensitive, so everyone built them unencrypted, which meant extra steps...

Now EC2 has announced that you can launch from a regular AMI and still end up with encrypted snapshots: 「Enable Hibernation on EC2 Instances when launching with an AMI without an Encrypted EBS Snapshot」.

That removes quite a bit of the prep work...
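
As a sketch, a launch might look like this (the AMI ID and device name are placeholders; hibernation still wants an encrypted root volume, you just no longer need an encrypted AMI to get one):

aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type c5.large \
    --hibernation-options Configured=true \
    --block-device-mappings '[{"DeviceName":"/dev/xvda","Ebs":{"Encrypted":true}}]'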

Choices for serving static websites

Saw 「Show HN: Scar – Static websites with HTTPS, a global CDN, and custom domains (github.com)」 on Hacker News; beyond the linked project, the comments mention quite a few tools...

One approach is to serve through GitHub Pages, or through Netlify; if your requirements genuinely call for AWS components, a more traditional architecture (EC2 or a VPS) is actually the more flexible choice.

So after someone posted a fairly unremarkable tool, everyone else compiled a list of the genuinely useful ones...

Elasticsearch makes its basic security features free

Elasticsearch decided to turn the basic security features from paid into free, quite clearly a move made under pressure from Open Distro for Elasticsearch: 「Security for Elasticsearch is now free」.

Note that this is not the open source version; the features were simply moved into the Basic tier for users to use for free:

Previously, these core security features required a paid Gold subscription. Now they are free as a part of the Basic tier. Note that our advanced security features — from single sign-on and Active Directory/LDAP authentication to field- and document-level security — remain paid features.

That means Open Distro for Elasticsearch still offers more:

With Open Distro for Elasticsearch, you can leverage your existing authentication infrastructure such as LDAP/Active Directory, SAML, Kerberos, JSON web tokens, TLS certificates, and Proxy authentication/SSO for user authentication. An internal user repository with support for basic HTTP authentication is also available for easy setup and evaluation.

Granular, role-based access control enables you to control the actions a user can perform on your Elasticsearch cluster. Roles control cluster operations, access to indices, and even the fields and documents users can access. Open Distro for Elasticsearch also supports multi-tenant environments, allowing multiple teams to share the same cluster while only being able to access their team's data and dashboards.

For now, leaning toward Open Distro for Elasticsearch still looks like the way to go...

How to change the default passwords on Open Distro for Elasticsearch

Since AWS's Open Distro for Elasticsearch has the security features built in (see 「AWS 對 Elastic Stack 實作免費的開源版本 Open Distro for Elasticsearch」), it should be the first choice for new Elasticsearch deployments.

But a fresh install comes with five default accounts, and two of them (admin and kibanaserver) can't have their passwords changed from Open Distro's Kibana UI; changing them turns out to take quite some work, and I don't know why it was designed this way :/

This covers the procedure for RPM (and deb) installs; for the Docker flavor, see the AWS blog article mentioned below: 「Change your Admin Passwords in Open Distro for Elasticsearch」.

First, generate a bcrypt hash via hash.sh, like this (typing password as the password):

bash /usr/share/elasticsearch/plugins/opendistro_security/tools/hash.sh
WARNING: JAVA_HOME not set, will use /usr/bin/java
[Password:]
$2y$12$QchkpgY8y/.0TL7wruWq4egBDrlpIaURiBYKoZD50G/twdylgwuhG

Then update the values in /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/internal_users.yml, and strip the readonly flags while you're at it.
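
For reference, the admin entry ends up looking roughly like this, with the hash from hash.sh above pasted in (a sketch; the exact field layout varies by version, so check against your installed file):

admin:
  hash: $2y$12$QchkpgY8y/.0TL7wruWq4egBDrlpIaURiBYKoZD50G/twdylgwuhG
  # keep the other original fields, minus the readonly line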

Next, push this internal_users.yml into Elasticsearch. The step needs to read files under /etc/elasticsearch/, so I lazily ran it as root:

sudo bash /usr/share/elasticsearch/plugins/opendistro_security/tools/securityadmin.sh -cd ../securityconfig/ -icl -nhnv -cacert /etc/elasticsearch/root-ca.pem -cert /etc/elasticsearch/kirk.pem -key /etc/elasticsearch/kirk-key.pem

Afterwards you may need to restart Elasticsearch and Kibana:

sudo service elasticsearch restart
sudo service kibana restart

Or just reboot... and test that things survive a restart while you're at it. In theory, once this is done you log in to Kibana with the new credentials.

The method above pieces together 「Default Password Reset」 (found first) and 「change admin password」 (a GitHub issue I found via the AWS blog article).

The official documentation I only found while writing this post; it doesn't normally show up in searches: 「Apply configuration changes」.

Amazon S3 is dropping path-style access

An announcement found while browsing Hacker News: 「Announcement: Amazon S3 will no longer support path-style API requests starting September 30th, 2020」.

There are currently two styles: one puts the bucket name in the path (V1), the other puts the bucket name in the hostname (V2):

Amazon S3 currently supports two request URI styles in all regions: path-style (also known as V1) that includes bucket name in the path of the URI (example: //s3.amazonaws.com/<bucketname>/key), and virtual-hosted style (also known as V2) which uses the bucket name as part of the domain name (example: //<bucketname>.s3.amazonaws.com/key).

V1 is the one being retired, slated to stop working in October 2020 (it will serve through the end of September):

Customers should update their applications to use the virtual-hosted style request format when making S3 API requests before September 30th, 2020 to avoid any service disruptions. Customers using the AWS SDK can upgrade to the most recent version of the SDK to ensure their applications are using the virtual-hosted style request format.

Virtual-hosted style requests are supported for all S3 endpoints in all AWS regions. S3 will stop accepting requests made using the path-style request format in all regions starting September 30th, 2020. Any requests using the path-style request format made after this time will fail.
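
If you want to pin the AWS CLI to the virtual-hosted style explicitly, there is an s3 addressing_style setting for ~/.aws/config (this sketch reuses the mycompany profile from earlier; the default auto already prefers virtual when the bucket name allows it):

[profile mycompany]
region = us-east-1
s3 =
    addressing_style = virtual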

DynamoDB now has fixed IP address ranges too

AWS announced that DynamoDB endpoints now have published IP address ranges: 「AWS specifies the IP address ranges for Amazon DynamoDB endpoints」, which gives people running IP firewalls a bit more control.

The data can be pulled from https://ip-ranges.amazonaws.com/ip-ranges.json; the entries whose service is DYNAMODB are the ones you want.

And because there were no IPv6 entries to be found, I realized DynamoDB currently offers no IPv6 endpoint...
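
A quick way to check both, going by the documented layout of ip-ranges.json (a prefixes array for IPv4 and an ipv6_prefixes array for IPv6):

curl -s https://ip-ranges.amazonaws.com/ip-ranges.json \
    | jq -r '.prefixes[] | select(.service == "DYNAMODB") | .ip_prefix'

# the IPv6 variant of the same query currently comes back empty:
curl -s https://ip-ranges.amazonaws.com/ip-ranges.json \
    | jq -r '.ipv6_prefixes[] | select(.service == "DYNAMODB") | .ipv6_prefix'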