Elsevier restricts the University of California's access

Back in March the University of California (UC) system publicly announced that it would not renew its contract because Elsevier would not accept its open access terms (see "UC announces it will not renew with Elsevier"). Elsevier apparently wanted to see whether there was still a chance of keeping the relationship going, so it kept providing service to the UC system in the meantime.

A few days ago I saw "Elsevier cuts off UC’s access to its academic journals (latimes.com)" on Hacker News; Elsevier has finally decided to act: "In act of brinkmanship, a big publisher cuts off UC’s access to its academic journals".

It isn't a full cut-off, though; access is restricted so that new material (using 2019/01/01 as the cut-off) is no longer visible:

As of Wednesday, Elsevier cut off access by UC faculty, staff and students to articles published since Jan. 1 in 2,500 Elsevier journals, including respected medical publications such as Cell and the Lancet and a host of engineering and scientific journals. Access to most material published in 2018 and earlier remains in force.

The business model UC proposed has the submitting authors pay the fees while readers pay nothing, exactly the opposite of the current model. UC's model encourages the spread of knowledge, whereas the existing model works the other way around, using the spread of knowledge to rake~ in~ big~ money~:

UC demanded that the new contract reflect the principle of open access — that work produced on its campuses be available to all outside readers, for free.

That was a direct challenge to the business model of Elsevier and other big academic publishers. Traditionally, the publishers accept papers for publication for free but charge steep subscription fees. UC is determined to operate under an alternative model, in which researchers pay to have their papers published but not for subscriptions.

From the comments on Hacker News I also found a few projects that are already underway, like Europe's "Plan S", which is likewise pushing for open access:

The plan requires scientists and researchers who benefit from state-funded research organisations and institutions to publish their work in open repositories or in journals that are available to all by 2021.

"PubPub · Community Publishing" is another pretty interesting project on the open source side, and it looks like quite a few academic institutions are backing it.

Estimating the rate of German tank production in WWII

I came across "The German Tank Problem", which covers a famous statistical application from WWII. The Chinese Wikipedia entry on the topic, 「德國坦克問題」, is fairly complete and may be a quicker read:

In statistical estimation theory, the German tank problem is the well-known problem of estimating the maximum of a discrete uniform distribution from sampling without replacement; it takes its name from its use in estimating the number of German tanks during World War II.

As mentioned above, the method is famous for just how accurate its estimates turned out to be:

Analysis of tank wheels produced an estimate of the number of wheel molds in use. After discussions with British wheel manufacturers, they estimated how many wheels that many molds could produce, and from that the number of tanks that could be built each month. Analysis of the wheels from two tanks (32 wheels each, 64 wheels in total) put February 1944 production at around 270, far higher than previously expected.

Records released by Germany after the war show that production for the single month of February 1944 was 276 tanks. The precision of the statistical result was far beyond anything conventional intelligence gathering could achieve, and "German tank problem" became the label for this whole class of statistical analysis problems.

It was later used to estimate the number of the classic Commodore 64, and that turned out pretty accurate too:

The formula has also seen non-military use, for example to estimate the total number of Commodore 64 computers, where the result (12.5 million) matched the official figure quite well.
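The estimator behind all of this is short enough to write down. A minimal sketch in Python, using the frequentist minimum-variance unbiased estimator and made-up serial numbers rather than the actual wheel data:

```python
def estimate_population_max(serials):
    """German tank problem, frequentist MVUE: N ~= m + m/k - 1, where m is
    the largest serial number observed and k is the number of samples
    (sampling without replacement from 1..N)."""
    k = len(serials)
    m = max(serials)
    return m + m / k - 1

# Made-up captured serial numbers, purely for illustration:
print(estimate_population_max([19, 40, 42, 60]))  # -> 74.0
```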

GrabFood uses location data to fix restaurant information

Grab's "How we harnessed the wisdom of crowds to improve restaurant location accuracy" is a write-up by their data team on how to use data they already have to quickly fix restaurant information. The approach doesn't need any machine learning; a few simple statistical calculations are enough to quickly clean up the existing data.

The data is actually collected through the driver app: the app sends a large amount of information back to the servers (periodic GPS position reports, pick-up status, and so on), and because the drivers know the area, what's in their heads is often quite a bit more accurate than the map:

One of the biggest advantages we have is the huge driver-partner fleet we have on the ground in cities across Southeast Asia. They know the roads and cities like the back of their hand, and they are resourceful. As a result, they are often able to find the restaurants and complete orders even if the location was registered incorrectly.

So they can turn this information around and use it to improve the map data. For example, from where drivers tap the "pick up" button and how long they wait there, they can estimate where a restaurant probably is, then compare that against the map data, which makes it easy to spot restaurants that have moved without the map being updated:

Fraction of the orders where the pick-up location was not “at” the restaurant: This fraction indicates the number of orders with a pick-up location not near the registered restaurant location (with near being defined both spatially and temporally as above). A higher value indicates a higher likelihood of the restaurant not being in the registered location subject to order volume

Median distance between registered and estimated locations: This factor is used to rank restaurants by a notion of “importance”. A restaurant which is just outside the fixed radius from above can be addressed after another restaurant which is a kilometer away.
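To make those two signals concrete, here is a minimal sketch in Python of how they could be computed for a single restaurant's orders. The function names, the 100 m radius, and the sample coordinates are all made up for illustration; this is not Grab's actual pipeline (which, per the quote above, also weighs things temporally):

```python
import math
from statistics import median

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    r = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def score_restaurant(registered, pickup_points, radius_m=100):
    """Estimate the restaurant's location as the per-axis median of the
    drivers' pick-up points, then compute the two signals quoted above:
    the fraction of orders picked up outside radius_m of the registered
    location, and the distance from the registered to the estimated spot."""
    est = (median(p[0] for p in pickup_points),
           median(p[1] for p in pickup_points))
    far = sum(1 for lat, lon in pickup_points
              if haversine_m(registered[0], registered[1], lat, lon) > radius_m)
    return {
        "estimated_location": est,
        "fraction_far_orders": far / len(pickup_points),
        "registered_to_estimated_m": haversine_m(*registered, *est),
    }

# Made-up example: the registered location is ~1 km away from where the
# drivers actually tap "pick up", so both signals light up.
orders = [(1.3001, 103.8502), (1.3003, 103.8499), (1.2999, 103.8501)]
print(score_restaurant(registered=(1.3090, 103.8500), pickup_points=orders))
```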

There are quite a few other improvements as well (for example, requiring drivers to be within a certain distance of the restaurant before they can tap "pick up", with that distance adjusted for restaurants that may sit inside shopping malls), and the overall result shows up as a big drop in the order cancellation rate:

Overall it looks like the system generates a list for humans to follow up on (calling the restaurant to ask, perhaps?), but the list should be highly accurate (drivers aren't going to waste their own time driving to some odd spot just to tap "pick up"), so running simple algorithms over this data can quickly fix a lot of problems...

UC announces it will not renew with Elsevier

The University of California (a university system with ten campuses, more than 250,000 students and 140,000 faculty and staff) decided that Elsevier falls short of what open access should mean, will not renew its contract with Elsevier, and put out a press release blasting Elsevier: "UC terminates subscriptions with world’s largest scientific publisher in push for open access to publicly funded research".

As a leader in the global movement toward open access to publicly funded research, the University of California is taking a firm stand by deciding not to renew its subscriptions with Elsevier. Despite months of contract negotiations, Elsevier was unwilling to meet UC’s key goal: securing universal open access to UC research while containing the rapidly escalating costs associated with for-profit journals.

This is probably the first shot fired among top American institutions? We'll see how many others it pushes to drop their subscriptions...

European research funders push for open access to research papers

Via "Radical open-access plan could spell end to journal subscriptions" I saw that eleven European research funders have formed "cOAlition S" to push for open access to research papers.

The goal is that, starting in 2020, research funded by these organizations must be published on platforms that meet fully open conditions:

cOAlition S signals the commitment to implement, by 1 January 2020, the necessary measures to fulfil its main principle: “By 2020 scientific publications that result from research funded by public grants provided by participating national and European research councils and funding bodies, must be published in compliant Open Access Journals or on compliant Open Access Platforms.

Right now the figure is only about 15%:

According to a December 2017 analysis, only around 15% of journals publish work immediately as open access (see ‘Publishing models’) — financed by charging per-article fees to authors or their funders, negotiating general open-publishing contracts with funders, or through other means.

This is one way to reduce the influence of platforms that charge for downloads...

Elsevier lets German research institutions keep access without a renewed contract

German research institutions still had not renewed by the end of 2017, when their contracts with Elsevier expired, but Elsevier decided to keep providing service anyway, provisionally for a year, while negotiations continue:

The Dutch publishing giant Elsevier has granted uninterrupted access to its paywalled journals for researchers at around 200 German universities and research institutes that had refused to renew their individual subscriptions at the end of 2017.

The institutions had formed a consortium to negotiate a nationwide licence with the publisher. They sought a collective deal that would give most scientists in Germany full online access to about 2,500 journals at about half the price that individual libraries have paid in the past. But talks broke down and, by the end of 2017, no deal had been agreed. Elsevier now says that it will allow the country’s scientists to access its paywalled journals without a contract until a national agreement is hammered out.

Elsevier is doing this mainly to keep German academic institutions from discovering that life without Elsevier is actually just fine. Plenty of researchers already know this: in most cases there are alternatives to Elsevier, and there is no need to waste money on such expensive contracts:

Günter Ziegler, a mathematician at the Free University of Berlin and a member of the consortium's negotiating team, says that German researchers have the upper hand in the negotiations. “Most papers are now freely available somewhere on the Internet, or else you might choose to work with preprint versions,” he says. “Clearly our negotiating position is strong. It is not clear that we want or need a paid extension of the old contracts.”

The alternatives take several forms. The freely downloadable arXiv keeps gaining importance: many researchers put a pre-print of their submitted papers there (and even keep it updated), and in recent years some famous proofs have been published only there (the Poincaré conjecture, for example). Hosting on someone else's site is also simpler than hosting on your own (no maintenance required), all of which has made arXiv the new de facto standard platform in academia.

Beyond arXiv, other fields have their own customary platforms; in cryptography, for example, the "Cryptology ePrint Archive" has been running for a long time.

Besides dedicated platforms, papers hosted on authors' own pages (usually personal space provided by a university or research institution) are also easy to find and download now that search engines are so good.

The more frontal assault is Sci-Hub, where papers downloaded from behind paywalls are uploaded and made publicly searchable. It changes domains constantly because it keeps getting blocked, but accessing its hidden service through Tor Browser (or your own Tor proxy setup) should avoid that problem.

Hoping Germany can hold out and prove that Elsevier really isn't needed anymore...

Facebook commissions its own research on whether social media is bad for people XDDD

I saw "Hard Questions: Is Spending Time on Social Media Bad for Us?" a while back and couldn't figure out how to even snark at it... and then I saw this tweet on Twitter XDDD

The déjà vu was overwhelming, so I went looking for the corresponding material from other industries:

I really don't know how to snark at this XDDD

How long it takes native English speakers to learn foreign languages

This report is pretty interesting: "A Map Showing How Much Time It Takes to Learn Foreign Languages: From Easiest to Hardest".

The article breaks languages into these categories (excluding English itself, the native language, which is Category 0):

  • Category I: 23-24 weeks (575-600 hours) Languages closely related to English
  • Category II: 30 weeks (750 hours) Languages similar to English
  • Category III: 36 weeks (900 hours) Languages with linguistic and/or cultural differences from English
  • Category IV: 44 weeks (1100 hours) Languages with significant linguistic and/or cultural differences from English
  • Category V: 88 weeks (2200 hours) Languages which are exceptionally difficult for native English speakers

Quoting the group that takes the longest:

Arabic
Cantonese (Chinese)
Mandarin (Chinese)
*Japanese
Korean
* Usually more difficult than other languages in the same category.

IBM's 50 qubit quantum computer

IBM is showing off what they've achieved: "IBM makes 20 qubit quantum computing machine available as a cloud service".

The real headline, though, is that they already have a 50 qubit prototype:

The company also announced that IBM researchers had successfully built a 50 qubit prototype, which is the next milestone for quantum computing, but it’s unclear when we will see this commercially available.

From 5 qubits to 20 qubits in 18 months:

IBM has been offering quantum computing as a cloud service since last year when it came out with a 5 qubit version of the advanced computers. Today, the company announced that it’s releasing 20-qubit quantum computers, quite a leap in just 18 months. A qubit is a single unit of quantum information.

If growth continues at this rate (4x every 18 months), there's a chance of breaking RSA 2048 bits about five years out? (That takes roughly 4,000 qubits.)
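As a quick sanity check on that back-of-the-envelope projection, here is the arithmetic written out, assuming the 4x-every-18-months pace simply continues (a big assumption) and taking the ~4,000-qubit figure above at face value:

```python
# Project 4x growth every 18 months starting from today's 20 qubits,
# until the rough ~4,000-qubit figure mentioned above is reached.
qubits, months = 20, 0
while qubits < 4000:
    qubits *= 4
    months += 18
print(months / 12, qubits)  # -> 6.0 years, 5120 qubits: the five-ish year ballpark
```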

That's much faster than I imagined; no wonder cryptography these days is all about algorithms that resist quantum computers...

The compute behind AlphaGo Zero

The AlphaGo Zero paper mentions that on the same hardware (4 TPUs) it crushes AlphaGo Master (the version that played Ke Jie at the start of this year) 89:11, mainly thanks to a higher-quality neural network and stronger move selection (the latter presumably being the payoff from merging the two neural networks into one):

This neural network improves the strength of the tree search, resulting in higher quality move selection and stronger self-play in the next iteration.

The obvious follow-up question is how long it took DeepMind to train this new neural network. Professor 吳毅成 has already worked out an estimate on Facebook:

The TPU-to-GPU conversion here is presumably based on Google's own description of the TPU at the time, "An in-depth look at Google’s first Tensor Processing Unit (TPU)":

In short, we found that the TPU delivered 15–30X higher performance and 30–80X higher performance-per-watt than contemporary CPUs and GPUs.

In GPU terms it's about 12K cards; converting back to TPUs puts it somewhere on the order of a thousand or so. For Google, which has already commercialized the TPU, that amount should be easy to come by; must be nice to be rich XD:

1. Looking at it from another angle, DeepMind managed to train the 40-block version in only 40 days; converting the numbers, DeepMind effectively used about 12,000 GTX 1080 Ti cards.
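For what it's worth, here is that conversion written out, using the ~12,000-GPU figure from the quote above and Google's 15-30x TPU speedup range; both inputs are rough, so this is only an order-of-magnitude check:

```python
# Convert the ~12,000 GTX 1080 Ti estimate into TPUs using Google's
# quoted 15-30x speedup range; the result is hundreds of TPUs, i.e. the
# same order of magnitude as the "around a thousand" figure above.
gpus = 12_000
for speedup in (15, 30):
    print(f"{speedup}x -> about {gpus // speedup} TPUs")
```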