Tag Archives: log

"Oh shit, git!": Common Git Problems

The "Oh shit, git!" site lists common Git problems and possible fixes; my favorite is the last one, the seemingly universal nuke-it-and-start-over:

cd ..
sudo rm -r fucking-git-repo-dir
git clone https://some.github.url/fucking-git-repo-dir.git
cd fucking-git-repo-dir

Although this last method feels like a joke, the other problems covered are all quite common, including amending the commit message of the most recent commit, adding more changes onto the most recent commit, and recovering after committing to the wrong branch.
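For reference, rough versions of those three fixes look something like this (branch and file names here are made up; the site itself walks through the details):

# Fix the message of the most recent commit:
git commit --amend -m "better message"

# Add something you forgot to the most recent commit, keeping its message:
git add forgotten-file.txt
git commit --amend --no-edit

# Committed on the wrong branch: copy the commit over, then drop it here
git checkout correct-branch      # hypothetical branch names
git cherry-pick wrong-branch     # picks up wrong-branch's tip commit
git checkout wrong-branch
git reset --hard HEAD~1          # destructive: discards the stray commit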

A Userscript to Open GitHub E-mail Links from Gmail

Ever since my Subversion days I have had the habit of receiving diff logs by e-mail (the built-in hooks support this). Gmail made it even more convenient: search is Google's strong suit, and Gmail is one of the few e-mail clients that can be driven entirely from the keyboard.

On Git with the GitHub platform this is a lot more cumbersome; you would have to build it yourself out of webhooks + the API. My current compromise is to receive GitHub's notification e-mails (which carry no diff log) and use a Userscript that opens every link in a GitHub notification e-mail when I press i: "open-github-links-in-gmail".

The script itself is trivial... what took the most time was that the hotkeys were mostly taken already, so I had to test which keys were still free.

Cloudflare Launches Logpush to Push Logs Out to You

Cloudflare has launched Logpush, which pushes logs to Amazon S3 or Google Cloud Storage (currently only these two are supported): "Logpush: the Easy Way to Get Your Logs to Your Cloud Storage". For now it is limited to the Enterprise plan:

Today, we’re excited to announce a new way to get your logs: Logpush, a tool for uploading your logs to your cloud storage provider, such as Amazon S3 or Google Cloud Storage. It’s now available in Early Access for Enterprise domains.

Logpush currently works with Amazon S3 and Google Cloud Storage, two of the most popular cloud storage providers.

Previously you had to pull the logs yourself and worry about HA (two small machines are enough; the machine cost is fine, but the maintenance cost is considerable). Now Cloudflare can push them to you instead...

My usual advice is to keep as many logs as possible (all kinds of system logs plus HTTP access logs). Storage costs keep falling (1TB on S3 is only USD$23/month, and with Glacier it drops to around USD$4/month), and logs shrink a lot under compression (they are full of repeated strings).
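As a minimal sketch of that workflow (bucket name and paths are made up), compressing a rotated log and shipping it straight into the Glacier storage class could look like:

# Compress yesterday's rotated access log (logs compress well)
gzip -9 /var/log/nginx/access.log.1

# Upload directly into the Glacier storage class; bucket name is made up
aws s3 cp /var/log/nginx/access.log.1.gz \
    s3://my-log-archive/nginx/access-20190101.gz \
    --storage-class GLACIER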

Even the old-school AWStats can show you quite a bit, and if you cross-reference IP addresses and User-agents with other event data, many machine learning algorithms get much richer input to work with.
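For instance, pulling IP address / User-agent pairs out of an access log is a one-liner (a sketch, assuming the common nginx/Apache "combined" log format):

# Print "IP<TAB>User-agent" for every request in a combined-format log
awk -F'"' '{ split($1, head, " "); print head[1] "\t" $6 }' access.log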

The IA Storage Class for Amazon EFS

I have never found a really good use case for Amazon EFS: because it speaks NFS, latency issues under heavy I/O keep it from being fast, and using it to pile up logs used to be extremely expensive...

The newly launched storage class addresses the log-pile use case by offering a much lower storage cost: "New – Infrequent Access Storage Class for Amazon Elastic File System (EFS)".

Unlike S3, though, EFS does not let you pick the storage class directly; the transition is managed by the system. Once the feature is turned on, files that have not been touched for 30 days are moved over:

Eligible Files – Files that are 128 KiB or larger and that have not been accessed or modified for at least 30 days can be transitioned to the new storage class. Modifications to a file’s metadata that do not change the file will not delay a transition.
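Enabling this presumably goes through EFS lifecycle management; with the AWS CLI it would look something like the following (the file system ID is made up):

# Turn on lifecycle management: files untouched for 30 days move to IA
aws efs put-lifecycle-configuration \
    --file-system-id fs-0123456789abcdef0 \
    --lifecycle-policies TransitionToIA=AFTER_30_DAYS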

Latency also goes up:

Files that have not been read or written for 30 days will be transitioned to the Infrequent Access storage class with no further action on your part. Files in the Standard Access class can be accessed with latency measured in single-digit milliseconds; files in the Infrequent Access class have latency in the double-digits.

Taking us-east-1 as an example, Standard is USD$0.3/GB-month while IA is only USD$0.045/GB-month, with a USD$0.01/GB charge on access; the price drop is substantial. For 1TB of logs that are rarely read back, that works out to roughly USD$300/month versus USD$45/month plus about USD$10 each time the whole set is read.

The article does not say when data moves back from IA to Standard, though; I will have to find a chance to ask...

Mark Callaghan's Five-Minute Introduction to LSM Trees

Mark Callaghan, who implemented MyRocks, spent five minutes at CIDR 2019 introducing LSM trees: "Geek code for LSM trees". Looking it up, I found that CIDR is held every two years, unlike the conferences I had run into before...

The slides are up at "Diversity of LSM tree shapes": the key points, selected under the constraint of fitting into five minutes...

AWS to Launch the Even Cheaper S3 Glacier Deep Archive

AWS plans to launch a new S3 offering with an even lower unit price but longer retrieval times: "Coming Soon – S3 Glacier Deep Archive for Long-Term Data Retention".

The target customers are those facing regulatory requirements:

It is designed for customers — particularly those in highly-regulated industries, such as the Financial Services, Healthcare, and Public Sectors — that retain data sets for 7-10 years or longer to meet regulatory compliance requirements.

It looks like the scenario where logs just get dumped and never touched again... (Write once, never read? XD)
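Once it launches, I would expect uploads to work like any other S3 storage class; a hypothetical sketch with the AWS CLI (the storage class name follows the announced branding and is not final until GA):

# Hypothetical: upload compliance logs straight into Deep Archive
aws s3 cp audit-logs-2018.tar.gz s3://my-compliance-archive/ \
    --storage-class DEEP_ARCHIVE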

Amazon EFS Is Also Getting an IA Version (Infrequent Access)

Amazon EFS is also getting an IA (Infrequent Access) version, Infrequent Access meaning rarely-accessed data: "Coming Soon – Amazon EFS Infrequent Access Storage Class".

This fits nicely with the way many people use Amazon EFS as a log dump... AWS says savings can reach 85%, though presumably you need a very large volume to hit that price?

EFS IA reduces storage costs for files not accessed every day, with savings up to 85% compared to the EFS Standard storage class.

Anyone who has used Amazon EFS tends to complain about its performance (over NFS, many operations cannot be cached, so network latency shows up everywhere); piling up logs or using it as cross-machine shared space is probably the mainstream usage for now...
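For context, the typical setup is just an NFS mount shared across machines, which is exactly where the per-operation network latency comes from; a sketch using the amazon-efs-utils mount helper (file system ID is made up):

# Mount an EFS file system via the amazon-efs-utils helper
sudo mount -t efs -o tls fs-0123456789abcdef0:/ /mnt/efs

# Every uncached metadata operation now crosses the network
ls -l /mnt/efs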

Twitter's Passwords Take a Hit...

Twitter has posted an announcement asking everyone to change their passwords: "Keeping your account secure". Not just your Twitter password, either: if you reused the same password elsewhere, they suggest changing those too:

Out of an abundance of caution, we ask that you consider changing your password on all services where you’ve used this password.

They do use bcrypt, but the unhashed passwords were written to a log, hence the incident:

We mask passwords through a process called hashing using a function known as bcrypt, which replaces the actual password with a random set of numbers and letters that are stored in Twitter’s system. This allows our systems to validate your account credentials without revealing your password. This is an industry standard.

Due to a bug, passwords were written to an internal log before completing the hashing process. We found this error ourselves, removed the passwords, and are implementing plans to prevent this bug from happening again.
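For reference, a bcrypt hash is what should have landed in storage; one quick way to generate one locally is Apache's htpasswd, assuming it is installed (-B selects bcrypt, -C sets the cost):

# Generate a bcrypt hash of "hunter2" (empty username, cost factor 10)
htpasswd -nbBC 10 "" hunter2
# => :$2y$10$...  (everything after the ":" is the bcrypt hash)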

This is the moment to plug password managers again: using a completely different password on every site limits the damage from this kind of incident...

Having Fun with git log...?

I came across "git log – the Good Parts", which digs through the useful features of Git's git log and organizes them... (hence the "good parts" XD)

The author adds the flags one at a time, so you can understand what each step contributes. Besides the repository the author recommends, I suggest testing on an open source project you know well (ideally one with a fairly complex branch structure):

git log
git log --oneline
git log --oneline --decorate
git log --oneline --decorate --all
git log --oneline --decorate --all --graph

When you find a combination you like, set an alias for it in ~/.gitconfig, something like git l? Leaving git log itself untouched avoids breaking scripts that call it and depend on the default output format XD
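Registering that alias is a single command (the name l is just the example above):

# Makes "git l" run the full combination; equivalent to adding
#   [alias]
#       l = log --oneline --decorate --all --graph
# to ~/.gitconfig
git config --global alias.l "log --oneline --decorate --all --graph"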

zheap, EnterpriseDB's Planned Fix for the VACUUM Problem...

The day before yesterday I was asked about "DO or UNDO - there is no VACUUM". After reading it carefully at home and digging up some background, it looks like a move toward InnoDB's approach...

Both PostgreSQL and InnoDB implement the interaction between transactions on the concept of MVCC, but the actual mechanisms differ, and one obvious consequence is that PostgreSQL needs VACUUM. The same author covered the differences and trade-offs eight years ago (2011): "MySQL vs. PostgreSQL, Part 2: VACUUM vs. Purge".

On UPDATE, InnoDB writes the new data into the table and moves the old data, which may still be needed for rollback, outside the table:

In InnoDB, only the most recent version of an updated row is retained in the table itself. Old versions of updated rows are moved to the rollback segment, while deleted row versions are left in place and marked for future cleanup. Thus, purge must get rid of any deleted rows from the table itself, and clear out any old versions of updated rows from the rollback segment.

Data removed by DELETE is then handled by the purge thread:

All the information necessary to find the deleted records that might need to be purged is also written to the rollback segment, so it's quite easy to find the rows that need to be cleaned out; and the old versions of the updated records are all in the rollback segment itself, so those are easy to find, too.

This is why InnoDB has purge-thread-related settings to take care of all this: "MySQL :: MySQL 5.7 Reference Manual :: 14.6.11 Configuring InnoDB Purge Scheduling".
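Those knobs are visible on a running server; a quick way to list them (a sketch, with connection options omitted):

# List the purge-related InnoDB settings on a MySQL 5.7 server
mysql -e "SHOW GLOBAL VARIABLES LIKE 'innodb_purge%';"
# e.g. innodb_purge_threads, innodb_purge_batch_size, ...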

PostgreSQL does it the other way around: the old version stays where it is, and the new version is written into the table alongside it:

PostgreSQL takes a completely different approach. There is no rollback tablespace, or anything similar. When a row is updated, the old version is left in place; the new version is simply written into the table along with it.

Where the old and new versions live is not really the problem; the problem is that there is no comparable place recording what needs to be cleaned up:

Lacking a centralized record of what must be purged, PostgreSQL's VACUUM has historically needed to scan the entire table to look for records that might require cleanup.

This is why PostgreSQL needs something like the autovacuum processes to scan, or manual vacuum runs. The author's article from last year (2017) shows the situation is still much the same: "MVCC and VACUUM".
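In practice that means either letting autovacuum do its scans or reclaiming space by hand; for example (a sketch, table name made up):

# Inspect the autovacuum-related settings
psql -c "SELECT name, setting FROM pg_settings WHERE name LIKE 'autovacuum%';"

# Or reclaim dead tuples on a single table manually, with progress output
psql -c "VACUUM (VERBOSE, ANALYZE) mytable;"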

In this year's (2018) article, EnterpriseDB proposes the zheap design: on UPDATE, write into the table and move the data that might be needed for rollback into an undo log. It is essentially InnoDB's approach carried over, although the post never says so XD:

That brings me to the design which EnterpriseDB is proposing. We are working to build a new table storage format for PostgreSQL, which we’re calling zheap. In a zheap, whenever possible, we handle an UPDATE by moving the old row version to an undo log, and putting the new row version in the place previously occupied by the old one. If the transaction aborts, we retrieve the old row version from undo and put it back in the original location; if a concurrent transaction needs to see the old row version, it can find it in undo. Of course, this doesn’t work when the block is full and the row is getting wider, and there are some other problem cases as well, but it covers many useful cases. In the typical case, therefore, even bulk updates do not force a zheap to grow. Instead, the undo grows. When a transaction commits, all row versions that will become dead are in the undo, not the zheap.

An obvious question comes up right away, though: if the goal is to improve cleanup, wouldn't it be enough to record somewhere which locations need reclaiming? Is changing the approach outright also meant to avoid fragmentation?

We'll have to wait and see how it turns out...