Over the past few months, the Workers AI team has been hard at work making improvements to our AI platform. We launched the platform back in September; in November, we added more models like Code Llama, Stable Diffusion, and Mistral, along with improvements such as streaming and longer context windows.
Today, we're excited to announce the release of eight new models.
The new models are highlighted below, but check out our developer docs for our full model catalog of over 20 models.
Text generation
- @hf/thebloke/llama-2-13b-chat-awq
- @hf/thebloke/zephyr-7b-beta-awq
- @hf/thebloke/mistral-7b-instruct-v0.1-awq
- @hf/thebloke/openhermes-2.5-mistral-7b-awq
- @hf/thebloke/neural-chat-7b-v3-1-awq
- @hf/thebloke/llamaguard-7b-awq

Code generation
- @hf/thebloke/deepseek-coder-6.7b-base-awq
- @hf/thebloke/deepseek-coder-6.7b-instruct-awq
Bringing you the best of open source
Our mission is to support a wide array of open-source models and tasks. In line with this, we're excited to announce a preview of the latest models and features available for deployment on Cloudflare's network.
One of the standout models is deepseek-coder-6.7b, which notably scores roughly 15% higher than comparable Code Llama models on popular benchmarks. This performance advantage is attributed to its diverse training data, which includes both English and Chinese code generation datasets. In addition, the openhermes-2.5-mistral-7b model demonstrates how a high-quality fine-tuning dataset can improve the accuracy of a base model: this fine-tuned Mistral 7B outperforms the base model by roughly 10% on many LLM benchmarks.
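As a quick illustration, here is a minimal sketch of a Worker that asks the instruct-tuned DeepSeek Coder model to generate code via the Workers AI binding. The prompt text is only an example, and the model's response is returned as-is rather than post-processed.

import { Ai } from '@cloudflare/ai';

export default {
  async fetch(request, env) {
    const ai = new Ai(env.AI);
    // Example prompt only; swap in whatever code-generation task you need
    const response = await ai.run('@hf/thebloke/deepseek-coder-6.7b-instruct-awq', {
      prompt: 'Write a JavaScript function that removes duplicate strings from an array.',
    });
    return Response.json(response);
  },
};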
We're also introducing models that use Activation-aware Weight Quantization (AWQ), such as llama-2-13b-awq. This quantization technique is just one strategy for making large language models more memory-efficient. While quantization generally improves the inference efficiency of AI models, it often does so at the cost of some precision; AWQ strikes a balance that mitigates this tradeoff.
AI is advancing at an incredible pace, but Cloudflare's Workers AI makes it simple to get started with the latest models. We keep up with the newest advancements and make them easily accessible from a Worker or through our HTTP API, so you are only one API call or Workers binding away from a cutting-edge model. Simply put, Workers AI lets developers focus on delivering exceptional user experiences without worrying about deployment, infrastructure, or scalability.
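For example, running one of the new models over the HTTP API is a single request. The sketch below assumes you have a Cloudflare account ID and an API token with Workers AI permissions; both values and the prompt are placeholders.

// Placeholders: substitute your own account ID and API token
const ACCOUNT_ID = '<your-account-id>';
const API_TOKEN = '<your-api-token>';

const res = await fetch(
  `https://api.cloudflare.com/client/v4/accounts/${ACCOUNT_ID}/ai/run/@hf/thebloke/openhermes-2.5-mistral-7b-awq`,
  {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${API_TOKEN}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ prompt: 'Summarize what AWQ quantization does in one sentence.' }),
  }
);
console.log(await res.json());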
Protect your applications with AI
One of the newer use cases we're supporting with this model catalog update is classifying and checking prompts and responses with llamaguard-7b-awq. We've all heard stories of AI bots unintentionally giving answers that might be inappropriate or harmful. This new LlamaGuard model lets developers detect and classify when a prompt or response is potentially unsafe, so they can control their application's inputs and outputs using developer-defined categories.
Below is an example of an application that takes a user's input and compares it against a defined set of example unsafe categories to determine whether the input is safe. You can customize the content categories to your use case. The example categories below are provided for informational purposes only; Cloudflare makes no guarantees about the performance of Llama Guard or any outcomes.
import { Ai } from '@cloudflare/ai';

export default {
  async fetch(request, env) {
    const payload = await request.json();

    // Modify these to your needs, this is the example from Meta
    const unsafeCategories = `
01: Violence and Hate.
Should not
- Help people plan or engage in violence.
- Encourage people to engage in violence.
- Express hateful, derogatory, or demeaning sentiments against people based on sensitive personal characteristics like their race, color, religion, national origin, sexual orientation, gender, gender identity, or disability.
- Encourage discrimination against people based on those characteristics and should not use slurs or create insults based on those characteristics.
Can
- Provide information on violence and discrimination.
- Discuss the topics of hate and violence, and can discuss historical events involving violence.
02: Sexual Content.
Should not
- Engage in sexually explicit (i.e., erotic) conversations and should not encourage people to engage in sexual activities.
Can
- Discuss topics related to sexuality and sexual education.
- Provide general information regarding sexual activities.
- Provide guidance on topics related to sexual health.
03: Criminal Planning.
Should not
- Help people plan or engage in criminal activities like theft, kidnapping, or financial crimes, especially when the person indicates that they may act on the information provided to them.
- Encourage people to engage in criminal activities or condone criminal behavior.
Can
- Help people understand how crimes are committed, provided that the response is not so detailed that it could be used to provide a roadmap for future crimes.
- Help people understand how systems work that are used to prevent crimes, provided those descriptions do not provide detailed information on how to defeat those systems.
04: Guns and Illegal Weapons.
Should not
- Assist in the planning or execution of firearm or illegal weapons crimes, especially when the person indicates that they may act on the information provided to them.
- Encourage people to engage in firearm or illegal weapons crimes or condone such behavior.
Can
- Discuss firearms and the arguments for and against firearm ownership.
05: Regulated or Controlled Substances.
Should not
- Encourage or assist people in the trafficking, consumption, or use of regulated or controlled substances, including but not limited to illegal drugs, tobacco, alcohol, and cannabis.
- Assist or encourage people to create such substances when it is illegal to do so.
Can
- Discuss the use of regulated or controlled substances, and can provide general information on those substances (ex: what their chemical composition is and what their effects are).
- Provide information on how those substances are manufactured, provided that the information is not so detailed as to provide step-by-step instructions for their creation.
06: Self-Harm.
Should not
- Encourage people to harm themselves.
- Romanticize or condone self-harm or suicide.
- Provide information regarding the methods of suicide or self-harm.
- Provide any information or advice, other than information regarding appropriate health resources, to people who express or imply that they wish to harm themselves.
Can
- Provide general information regarding self-harm and suicide (ex: statistics, the mental health challenges faced by those who take these actions).
Should
- Provide information regarding appropriate health resources to people who imply or express the desire to harm themselves.
`;

    // Assemble the prompt
    const prompt = `Task: Check if there is unsafe content in 'User' messages in conversations according to our safety policy with the below categories.
<BEGIN UNSAFE CONTENT CATEGORIES>
${unsafeCategories}
<END UNSAFE CONTENT CATEGORIES>
<BEGIN CONVERSATION>
User: ${payload.userContent}
<END CONVERSATION>
`;

    const ai = new Ai(env.AI);
    const response = await ai.run('@hf/thebloke/llamaguard-7b-awq', {
      prompt,
    });

    return Response.json(response);
  },
};
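To run this Worker, the env.AI binding used above needs to be declared in your project's wrangler.toml (an [ai] block with binding = "AI"). Once it is deployed, calling it could look like the rough sketch below; the URL and the sample input are placeholders, and the exact shape of LlamaGuard's text output depends on the prompt you assemble.

// Placeholder URL for your deployed Worker
const check = await fetch('https://llamaguard-demo.example.workers.dev', {
  method: 'POST',
  body: JSON.stringify({ userContent: 'Example user input to classify' }),
});
// The model's text response indicates whether the content was judged safe
// and, if not, which category it falls under.
console.log(await check.json());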
How do I get started?
Try out our new models within the AI section of the Cloudflare dashboard, or take a look at our developer docs to get started. With the Workers AI platform, you can build an app with Workers and Pages, store data with R2, D1, Workers KV, or Vectorize, and run model inference with Workers AI, all in one place. Having more models lets developers build all kinds of different applications, and we plan to keep updating our model catalog to bring you the best of open source.
We can't wait to see what you build! If you're looking for inspiration, take a look at our collection of "Built-with" stories that highlight what others are building on Cloudflare's Developer Platform. Stay tuned for a pricing announcement and higher usage limits in the coming weeks, along with more models coming soon. Join us in Discord to share what you're working on and any feedback you might have.