ExpressVPN's VPN server in Turkey was seized...

ExpressVPN's VPN server in Turkey was seized as part of the investigation into the assassination of the ambassador: 「VPN Server Seized to Investigate Russian Ambassador’s Assassination」.

A VPN server operated by ExpressVPN was seized by Turkish authorities to investigate the assassination of Andrei Karlov, the Russian Ambassador to Turkey. Authorities hoped to find more information on people who removed digital traces of the assassin, but the server in question held no logs.

ExpressVPN's official response is at 「ExpressVPN statement on Andrey Karlov investigation」; the key part:

As we stated to Turkish authorities in January 2017, ExpressVPN does not and has never possessed any customer connection logs that would enable us to know which customer was using the specific IPs cited by the investigators. Furthermore, we were unable to see which customers accessed Gmail or Facebook during the time in question, as we do not keep activity logs. We believe that the investigators’ seizure and inspection of the VPN server in question confirmed these points.

As for whether that's actually true, it will take time to confirm...

Amazon Elasticsearch now supports I3 instances (i.e. 1.5 PB of disk)

Amazon Elasticsearch now supports I3 instances: 「Run Petabyte-Scale Clusters on Amazon Elasticsearch Service Using I3 instances」.

Amazon Elasticsearch Service now supports I3 instances, allowing you to store up to 1.5 petabytes of data in a single Elasticsearch cluster for large log analytics workloads.

A single i3.16xlarge has 15.2 TB of disk, so 100 of them comes out to roughly 1.5 PB; no idea how slowly that would run XDDD

The Amazon Elasticsearch Service – Amazon Web Services (AWS) | FAQs page hasn't been corrected yet XD:

You can request a service limit increase up to 100 instances per domain by creating a case with the AWS Support Center. With 100 instances, you can allocate about 150 TB of EBS storage to a single domain.

AWS launches Amazon GuardDuty for internal network monitoring

AWS has launched Amazon GuardDuty to monitor internal networks: 「Amazon GuardDuty – Continuous Security Monitoring & Threat Detection」.

The diagram shows it pulling together many kinds of log data and then making an overall judgment:

In combination with information gleaned from your VPC Flow Logs, AWS CloudTrail Event Logs, and DNS logs, this allows GuardDuty to detect many different types of dangerous and mischievous behavior including probes for known vulnerabilities, port scans and probes, and access from unusual locations.

So even Bitcoin-related sites are used as one of the signals XD

It opened in quite a few regions (compared to the earlier AWS Elemental MediaOOXX series...):

Amazon GuardDuty is available in production form in the US East (Northern Virginia), US East (Ohio), US West (Oregon), US West (Northern California), EU (Ireland), EU (Frankfurt), EU (London), South America (São Paulo), Canada (Central), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), and Asia Pacific (Mumbai) Regions and you can start using it today!

CloudFlare is going to run a Certificate Transparency log server too...

Saw CloudFlare's request to be added to Chromium's (Google Chrome's) list of log servers: 「Certificate Transparency - Cloudflare "nimbus2017" Log Server Inclusion Request」.

Put this next to the earlier posts 「Proposal in Chromium to remove HPKP (HTTP Public Key Pinning)」 and 「Let's Encrypt's embedded SCT support」, and it looks like browsers will ship a whitelist, and only embedded SCTs from log servers on that whitelist will be trusted...

But if that's how it works, shouldn't log servers also need some kind of auditing mechanism?

Feels like something went in the wrong direction somewhere...

Logging TLS connection information in nginx

I wanted nginx's access log to record the TLS protocol and cipher used by each HTTPS connection.

「How can I let nginx log the used SSL/TLS protocol and ciphersuite?」 points at the right direction: $ssl_protocol and $ssl_cipher (from the Embedded Variables section of 「Module ngx_http_ssl_module」).

That answer inserts the protocol near the front, but I wanted to keep the leading fields unchanged and append the protocol & cipher at the end, so I added an /etc/nginx/conf.d/combined_ssl.conf (I'm using ondrej's PPA here, whose main config pulls in everything under /etc/nginx/conf.d/; not sure how other setups behave):

# combined_ssl: the standard combined format with the TLS protocol and cipher appended
log_format combined_ssl '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $ssl_protocol/$ssl_cipher';

Then the HTTPS vhosts that were using combined just get switched to combined_ssl.
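
For reference, a minimal sketch of what the switch looks like in a server block; the hostname, certificate paths, and log path below are hypothetical, not from the original post:

server {
    listen 443 ssl;
    server_name example.com;                        # hypothetical vhost
    ssl_certificate     /etc/ssl/example.com.pem;   # hypothetical paths
    ssl_certificate_key /etc/ssl/example.com.key;

    # use the combined_ssl format defined above instead of the default "combined"
    access_log /var/log/nginx/example.com-access.log combined_ssl;
}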

I'll let it collect data for a while, then analyze it and see how to tune the ciphers...

nginx 1.13 will support TLSv1.3

nginx's 「CHANGES」 shows that 1.13.0 added TLSv1.3 (it isn't a finalized standard yet, but plenty of people are already supporting it XDDD):

*) Feature: the "TLSv1.3" parameter of the "ssl_protocols" directive.

So once the 1.14 stable release is out, we should be able to play with TLSv1.3...
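
Assuming the build is linked against an OpenSSL with TLS 1.3 support, turning it on should just be the usual ssl_protocols directive; a minimal sketch, not something I've tested:

# in the http {} or server {} block; requires an nginx binary built against
# an OpenSSL version that supports TLS 1.3
ssl_protocols TLSv1.2 TLSv1.3;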

The Amazon Aurora paper

The boss over at AWS introduces the paper on their own product, Amazon Aurora: 「Weekend Reading: Amazon Aurora: Design Considerations for High Throughput Cloud-Native Relational Databases.」; the paper itself is available at 「Amazon Aurora: Design Considerations for High Throughput Cloud-Native Relational Databases」.

Amazon Aurora takes a rather unusual architecture to meet its high-reliability requirements, mostly by building on a bunch of already very robust underlying services, such as using Amazon S3 to shuffle a lot of data around.

That said, the comparisons AWS makes in the paper don't really match reality, because the way MySQL is deployed in distributed setups nowadays is quite different (i.e. Galera Cluster); many of the points the paper compares simply aren't problems the other approaches run into. So take it with a grain of salt; they are, after all, promoting their own product...

The StackOverflow post on how to exit Vim...

It's now being used for PR: 「Stack Overflow: Helping One Million Developers Exit Vim」.

Since Vim is the editor you can pretty much count on being preinstalled on Unix-like systems, it often shows up in tutorials (chosen for its ubiquity, though complete beginners then get stuck...), or people land in it by accident after typing commands like vipw or visudo:

You can see pageviews have passed one million XDDD and the traffic is quite steady:

Broken down by region (not normalized by population, though...):

Then a cross-analysis of what other questions the people stuck in Vim usually read:

From a data-analysis point of view, all of this can be done with access logs that carry a cookie. Once you have the access logs, you can run the analysis with BigQuery on Google Cloud, or with Amazon Athena on AWS.

How InnoDB redo log size affects performance

「Benchmark(et)ing with InnoDB redo log size」 discusses how the size of the InnoDB redo log (i.e. innodb_log_file_size and innodb_log_files_in_group) affects performance.

The key point is stated right at the top: on newer MySQL, a larger redo log gives better performance (on average) in almost every case:

tl;dr - conclusions specific to my test

  1. A larger redo log improves throughput
  2. A larger redo log helps more with slower storage than with faster storage because page writeback is more of a bottleneck with slower storage and a larger redo log reduces writeback.
  3. A larger redo log can help more when the working set is cached because there are no stalls from storage reads and storage writes are more likely to be a bottleneck.
  4. InnoDB in MySQL 5.7.17 is much faster than 5.6.35 in all cases except IO-bound + fast SSD

The improvement in average performance is clearly significant, whether from enlarging the redo log or from upgrading to 5.7:

But the author also ran into odd performance problems: although the average improvement is significant, performance degrades quite badly as more data gets inserted; the details are on the original page.

The results above show average throughput and that hides a lot of interesting behavior. We expect throughput over time to not suffer from variance -- for both InnoDB and for MyRocks. For many of the results below there is a lot of variance (jitter).

So for now it's probably enough to just enlarge the redo log (write performance will improve at the very least), without treating this as a reason to upgrade MySQL.
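
A minimal my.cnf sketch of what "just enlarge it" could look like; the 2 GB x 2 files below are illustrative values rather than a recommendation from the post, and on MySQL 5.6/5.7 a change to innodb_log_file_size only takes effect after a clean shutdown and restart:

[mysqld]
# total redo log capacity = innodb_log_file_size * innodb_log_files_in_group = 4G
innodb_log_file_size      = 2G
innodb_log_files_in_group = 2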
