ChatGPT now has an official paid API

OpenAI has announced a paid API for ChatGPT: "Introducing ChatGPT and Whisper APIs".

The surprising part is that the new model is priced 90% lower than text-davinci-003 (GPT-3.5), that is, one tenth of the price:

Model: The ChatGPT model family we are releasing today, gpt-3.5-turbo, is the same model used in the ChatGPT product. It is priced at $0.002 per 1k tokens, which is 10x cheaper than our existing GPT-3.5 models. It’s also our best model for many non-chat use cases—we’ve seen early testers migrate from text-davinci-003 to gpt-3.5-turbo with only a small amount of adjustment needed to their prompts.

The basic structure looks compatible, so migrating existing text-davinci-003 code over to gpt-3.5-turbo shouldn't take much effort. However, it is a different endpoint, so you can't just swap the model name:

We’ve created a new endpoint to interact with our ChatGPT models[.]

The new usage can be seen from the Python bindings:

import openai

completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Tell the world about the ChatGPT API in the style of a pirate."}]
)

print(completion)
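
For reference, here is a minimal sketch of the migration described above, assuming the same pre-1.0 openai Python SDK: the old Completion call against text-davinci-003 next to the new ChatCompletion call (the prompt string is made up for illustration):

import openai

# Assumes OPENAI_API_KEY is set in the environment.
prompt = "Summarize the ChatGPT API announcement for developers."

# Old style: the completions endpoint with text-davinci-003.
old = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    max_tokens=200,
)
print(old.choices[0].text)

# New style: the chat completions endpoint with gpt-3.5-turbo; the prompt
# becomes a list of role-tagged messages.
new = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(new.choices[0].message.content)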

At this price point you can really imagine a wave of startup turnover coming...

Using an AI model to detect AI-generated text

OpenAI has released a new model that can be used to determine whether text was generated by AI: "New AI classifier for indicating AI-written text".

But the current results are not very good. Looking at English text alone, the true positive rate is only 26%, while the false positive rate is 9%:

In our evaluations on a “challenge set” of English texts, our classifier correctly identifies 26% of AI-written text (true positives) as “likely AI-written,” while incorrectly labeling human-written text as AI-written 9% of the time (false positives).

They also mention weaknesses, such as shorter content being hard to classify:

The classifier is very unreliable on short texts (below 1,000 characters). Even longer texts are sometimes incorrectly labeled by the classifier.

Content that has a single correct answer is also hard to classify, because the correct answer is nearly the same no matter who wrote it:

Text that is very predictable cannot be reliably identified. For example, it is impossible to predict whether a list of the first 1,000 prime numbers was written by AI or humans, because the correct answer is always the same.

They also mention a technical limitation: the current approach is closer to "detecting text produced by models trained on certain corpora" than to general-purpose AI text detection:

Classifiers based on neural networks are known to be poorly calibrated outside of their training data. For inputs that are very different from text in our training set, the classifier is sometimes extremely confident in a wrong prediction.

It doesn't look usable yet...

OpenAI launches ChatGPT Plus

OpenAI has announced a paid plan for ChatGPT: "Introducing ChatGPT Plus".

For now it is only open to the United States:

ChatGPT Plus is available to customers in the United States, and we will begin the process of inviting people from our waitlist over the coming weeks. We plan to expand access and support to additional countries and regions soon.

The announced price is US$20/mo, which essentially guarantees access. This is quite a bit lower than the earlier rumored US$42/mo "Professional" plan: "ChatGPT users report $42 a month pricing for 'pro' access but no official announcement yet":

The new subscription plan, ChatGPT Plus, will be available for $20/month, and subscribers will receive a number of benefits:

  • General access to ChatGPT, even during peak times
  • Faster response times
  • Priority access to new features and improvements

I will probably subscribe. Even on the current free tier I have already found some recurring usage patterns that save quite a bit of time...

Using DALL·E 2 images as blog post thumbnails

Saw this idea on Hacker News in "I replaced all our blog thumbnails using DALL·E 2 (deephaven.io)"; the original article is "I replaced all our blog thumbnails using DALL·E 2 for $45: here's what I learned".

Blog posts with good images get more exposure and engagement, so the author decided to put OpenAI's DALL·E 2 to work: give it a description and let DALL·E 2 generate the image.

Many of the images generated in the article are quite interesting, for example the one produced from the prompt "a cute blue colored gopher with blue fur programming on multiple monitors displaying many spreadsheets, digital art".
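
If you want to try the same workflow yourself, here is a minimal sketch assuming the pre-1.0 openai Python SDK and its Images endpoint; the prompt is the one quoted above, and the output filename is made up for illustration:

import urllib.request

import openai

# Assumes OPENAI_API_KEY is set in the environment.
prompt = (
    "a cute blue colored gopher with blue fur programming on multiple "
    "monitors displaying many spreadsheets, digital art"
)

# Ask DALL·E 2 for one 1024x1024 image matching the prompt.
result = openai.Image.create(prompt=prompt, n=1, size="1024x1024")

# The API returns a temporary URL for each generated image; download it.
image_url = result["data"][0]["url"]
urllib.request.urlretrieve(image_url, "thumbnail.png")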

It's not exactly cheap though: he spent US$45 generating images for roughly a hundred posts:

I spent the weekend and $45 in OpenAi credits generating new thumbnails that better represent the content of all 100+ posts from our blog.

I wonder whether doing it yourself with the approach mentioned earlier in "Playing with text-to-image using min(DALL·E)" would work?

Security problems in code generated by GitHub Copilot

After the article "Encoding data for POST requests" came out, people went back and noticed that the example on the GitHub Copilot homepage itself contains a security vulnerability:

async function isPositive(text: string): Promise<boolean> {
  const response = await fetch(`http://text-processing.com/api/sentiment/`, {
    method: "POST",
    body: `text=${text}`,
    headers: {
      "Content-Type": "application/x-www-form-urlencoded",
    },
  });
  const json = await response.json();
  return json.label === "pos";
}

The text=${text} part is an injection-style vulnerability: text is interpolated into the application/x-www-form-urlencoded body without any escaping, so input containing characters like & or = can inject or overwrite other form fields. The examples on the homepage were presumably hand-picked, yet this serious problem still slipped through, which says something about the issues GitHub and OpenAI have on this product line...
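
As a sketch of the conventional fix (my own example, not something taken from the Copilot homepage), you can let URLSearchParams do the form encoding instead of interpolating the string yourself:

async function isPositive(text: string): Promise<boolean> {
  const response = await fetch(`http://text-processing.com/api/sentiment/`, {
    method: "POST",
    // URLSearchParams percent-encodes the value, so "&" or "=" inside `text`
    // can no longer inject extra form fields; fetch also sets the
    // application/x-www-form-urlencoded Content-Type automatically.
    body: new URLSearchParams({ text }),
  });
  const json = await response.json();
  return json.label === "pos";
}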

GitHub Copilot, launched by GitHub in collaboration with OpenAI

Saw at the top of the Hacker News front page that GitHub and OpenAI have jointly launched GitHub Copilot; the corresponding discussion is at "GitHub Copilot: your AI pair programmer (copilot.github.com)".

GitHub Copilot guesses the "complete snippet" you are about to write next.

In the Hacker News discussion there is a review from someone who took part in the alpha test: it guesses right roughly one time in ten, but even so it still provides a lot of useful information (such as function and variable names):

fzaninotto

I've been using the alpha for the past 2 weeks, and I'm blown away. Copilot guesses the exact code I want to write about one in ten times, and the rest of the time it suggests something rather good, or completely off. But when it guesses right, it feels like it's reading my mind.

It's really like pair programming, even though I'm coding alone. I have a better understanding of my own code, and I tend to give better names and descriptions to my methods. I write better code, documentation, and tests.

Copilot has made me a better programmer. No kidding. This is a huge achievement. Kudos to the GitHub Copilot team!

然後也有人笑稱總算找到理由寫 comment 了:

pfraze

They finally did it. They finally found a way to make me write comments

On the flip side, the other big issue is copyright, which I didn't see addressed in the current FAQ... The Hacker News discussion brings it up, but there is no solid conclusion yet.

For now only VSCode is supported; maybe other editors will get support through LSP later?

It also reminds me of Kite, a machine-learning-based autocomplete tool that is not as powerful but still pretty decent.

A fake news generator and detector

Saw this on Hacker News: it is about "generating news with neural networks" (that is, mass-producing fake news with software), and this work covers both generation and detection: "Grover – A State-of-the-Art Defense Against Neural Fake News (allenai.org)".

The experimental site is "Grover - A State-of-the-Art Defense against Neural Fake News", and there is also a paper, "Defending Against Neural Fake News", to read.

A few months ago, OpenAI used neural networks to build a program that writes news automatically, and at the time claimed the results were so good that they decided not to release them in full: "Better Language Models and Their Implications". For coverage in Chinese, see this iThome article: "AI文字產生技術引發假新聞爭議,OpenAI決定只公開部份技術成果".

Now The Allen Institute for Artificial Intelligence has successfully reproduced OpenAI's results, naming the model Grover, and found that the trained model can not only write news but also detect whether an article was machine-generated; by their own tests, the detection accuracy is quite high:

To study and detect neural fake news, we built a model named Grover. Our study presents a surprising result: the best way to detect neural fake news is to use a model that is also a generator. The generator is most familiar with its own habits, quirks, and traits, as well as those from similar AI models, especially those trained on similar data, i.e. publicly available news. Our model, Grover, is a generator that can easily spot its own generated fake news articles, as well as those generated by other AIs. In a challenging setting with limited access to neural fake news articles, Grover obtains over 92% accuracy at telling apart human-written from machine-written news. Please read our publication for more information.

It looks like the source code and the model still haven't been released, but an open source clone seems bound to show up sooner or later...

This reminds me of the information-control warfare in the Ghost in the Shell TV anime; the series doesn't go into much detail, but tools like this feel like one piece of it...