
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Fri, 10 Apr 2026 00:09:04 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Running fine-tuned models on Workers AI with LoRAs]]></title>
            <link>https://blog.cloudflare.com/fine-tuned-inference-with-loras/</link>
            <pubDate>Tue, 02 Apr 2024 13:00:48 GMT</pubDate>
            <description><![CDATA[ Workers AI now supports fine-tuned models using LoRAs. But what is a LoRA and how does it work? In this post, we dive into fine-tuning, LoRAs, and even some math to share the details of how it all works under the hood. ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/70HGv5JY8CcvWM8IesAxt0/5b60789faf49d45cd4f54e6dd8e4efd4/loraai.png" />
            
            </figure>
    <div>
      <h3>Inference from fine-tuned LLMs with LoRAs is now in open beta</h3>
      <a href="#inference-from-fine-tuned-llms-with-loras-is-now-in-open-beta">
        
      </a>
    </div>
    <p>Today, we’re excited to announce that you can now run fine-tuned inference with LoRAs on Workers AI. This feature is in open beta and available for pre-trained LoRA adapters to be used with Mistral, Gemma, or Llama 2, with some limitations. Take a look at our <a href="/workers-ai-ga-huggingface-loras-python-support/">product announcements blog post</a> to get a high-level overview of our Bring Your Own (BYO) LoRAs feature.</p><p>In this post, we’ll do a deep dive into what fine-tuning and LoRAs are, show you how to use them on our Workers AI platform, and then delve into the technical details of how we implemented it on our platform.</p>
    <div>
      <h2>What is fine-tuning?</h2>
      <a href="#what-is-fine-tuning">
        
      </a>
    </div>
    <p>Fine-tuning is a general term for modifying an AI model by continuing to train it with additional data. The goal of fine-tuning is to increase the probability that a generation is similar to your dataset. Training a model from scratch is not practical for many use cases given how expensive and time-consuming it can be. By fine-tuning an existing pre-trained model, you benefit from its capabilities while also accomplishing your desired task. <a href="https://www.cloudflare.com/learning/ai/what-is-lora/">Low-Rank Adaptation (LoRA)</a> is a specific fine-tuning method that can be applied to various model architectures, not just LLMs. In traditional fine-tuning methods, the pre-trained model weights are commonly modified directly or fused with additional fine-tune weights. LoRA, on the other hand, allows the fine-tune weights and the pre-trained model to remain separate, leaving the pre-trained model unchanged. The end result is that you can train models to be more accurate at specific tasks, such as generating code, having a specific personality, or generating images in a specific style. You can even fine-tune an existing <a href="https://www.cloudflare.com/learning/ai/what-is-large-language-model/">LLM</a> to understand additional information about a specific topic.</p><p>The approach of maintaining the original base model weights means that you can create new fine-tune weights with relatively little compute. You can take advantage of existing foundational models (such as Llama, Mistral, and Gemma), and adapt them for your needs.</p>
    <div>
      <h2>How does fine-tuning work?</h2>
      <a href="#how-does-fine-tuning-work">
        
      </a>
    </div>
    <p>To better understand fine-tuning and why LoRA is so effective, we have to take a step back to understand how AI models work. AI models (like LLMs) are neural networks that are trained through deep learning techniques. In neural networks, there are a set of parameters that act as a mathematical representation of the model’s domain knowledge, made up of weights and biases – in simple terms, numbers. These parameters are usually represented as large matrices of numbers. The more parameters a model has, the larger the model is, so when you see models like llama-2-7b, you can read “7b” and know that the model has 7 billion parameters.</p><p>A model’s parameters define its behavior. When you train a model from scratch, these parameters usually start off as random numbers. As you train the model on a dataset, these parameters get adjusted bit-by-bit until the model reflects the dataset and exhibits the right behavior. Some parameters will be more important than others, and weights capture that varying importance. Weights play a crucial role in the model's ability to capture patterns and relationships in the data it is trained on.</p><p>Traditional fine-tuning will adjust <i>all</i> the parameters in the trained model with a new set of weights. As such, a fine-tuned model requires us to serve the same number of parameters as the original model, which means it can take a lot of time and compute to train and run inference for a fully fine-tuned model. On top of that, new state-of-the-art models, or versions of existing models, are regularly released, meaning that fully fine-tuned models can become costly to train, maintain, and store.</p>
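    <p>To put those sizes in perspective, here is a quick back-of-the-envelope calculation (illustrative numbers, not a statement about any particular deployment):</p>

```javascript
// Rough memory footprint of a "7b" model stored as 16-bit floats.
const params = 7e9;                         // 7 billion parameters
const bytesPerParam = 2;                    // fp16 = 2 bytes per weight
const gib = (params * bytesPerParam) / 2 ** 30;
console.log(gib.toFixed(1));                // ≈ "13.0" GiB for the raw weights alone
```

    <p>Fully fine-tuning such a model means producing, storing, and serving a second full-size copy of all of those parameters.</p>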
    <div>
      <h2>LoRA is an efficient method of fine-tuning</h2>
      <a href="#lora-is-an-efficient-method-of-fine-tuning">
        
      </a>
    </div>
    <p>In the simplest terms, LoRA avoids adjusting parameters in a pre-trained model and instead allows us to apply a small number of additional parameters. These additional parameters are applied temporarily to the base model to effectively control model behavior. Relative to traditional fine-tuning methods, it takes a lot less time and compute to train these additional parameters, which are referred to as a LoRA adapter. After training, we package up the LoRA adapter as a separate model file that can then plug into the base model it was trained from. A fully fine-tuned model can be tens of gigabytes in size, while these adapters are usually just a few megabytes. This makes adapters a lot easier to distribute, and serving fine-tuned inference with LoRA only adds milliseconds of latency to total inference time.</p><p>If you’re curious to understand why LoRA is so effective, buckle up — we first have to go through a brief lesson on linear algebra. If that’s not a term you’ve thought about since university, don’t worry, we’ll walk you through it.</p>
    <div>
      <h2>Show me the math</h2>
      <a href="#show-me-the-math">
        
      </a>
    </div>
    <p>With traditional fine-tuning, we can take the weights of a model (<i>W0</i>) and tweak them to output a new set of weights — so the difference between the original model weights and the new weights is <i>ΔW</i>, representing the change in weights. Therefore, a tuned model will have a new set of weights which can be represented as the original model weights plus the change in weights, <i>W0</i> + <i>ΔW</i>.</p><p>Remember, all of these model weights are actually represented as large matrices of numbers. In math, every matrix has a property called rank (<i>r</i>), which describes the number of linearly independent columns or rows in a matrix. When matrices are low-rank, they have only a few columns or rows that are “important”, so we can actually decompose or split them into two smaller matrices with the most important parameters (think of it like factoring in algebra). This technique is called rank decomposition, which allows us to greatly reduce and simplify matrices while keeping the most important bits. In the context of fine-tuning, rank determines how many parameters get changed from the original model – the higher the rank, the stronger the fine-tune, giving you more granularity over the output.</p><p>According to the <a href="https://arxiv.org/abs/2106.09685">original LoRA paper</a>, researchers found that the change in weights during fine-tuning has a low intrinsic rank. Therefore, we can apply rank decomposition to our matrix representing the change in weights <i>ΔW</i> to create two smaller matrices <i>A, B</i>, where <i>ΔW = BA</i>. Now, the change in the model can be represented by two smaller low-rank matrices. This is why this method of fine-tuning is called Low-Rank Adaptation.</p>
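    <p>To make the savings concrete, here is a quick sketch of the parameter counts for a single weight matrix (the 4096×4096 dimensions are illustrative, not taken from any specific model):</p>

```javascript
// Trainable parameters for one weight matrix: full fine-tune vs. LoRA.
const d = 4096, k = 4096;             // illustrative layer dimensions
const r = 8;                          // LoRA rank
const fullDelta = d * k;              // ΔW: every entry is trainable
const loraDelta = r * (d + k);        // B is d×r, A is r×k
console.log(fullDelta);               // 16777216
console.log(loraDelta);               // 65536
console.log(fullDelta / loraDelta);   // 256 — 256× fewer trainable parameters
```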
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/13k5puOQpL75CRNCTv5ZYE/309e49ef14cc3ef7a3493c5ad75c1f97/Lora-lineapro.png" />
            
            </figure><p>When we run inference, we only need the smaller matrices <i>A, B</i> to change the behavior of the model. The model weights in <i>A, B</i> constitute our LoRA adapter (along with a config file). At runtime, we add the model weights together, combining the original model (<i>W0</i>) and the LoRA adapter (<i>A, B</i>). Adding and subtracting are simple mathematical operations, meaning that we can quickly swap out different LoRA adapters by adding and subtracting <i>A, B</i> from <i>W0</i>. By temporarily adjusting the weights of the original model, we modify the model’s behavior and output; as a result, we get fine-tuned inference with minimal added latency.</p><p>According to the original <a href="https://arxiv.org/abs/2106.09685">LoRA paper</a>, “LoRA can reduce the number of trainable parameters by 10,000 times and the GPU memory requirement by 3 times”. Because of this, LoRA is one of the most popular methods of fine-tuning: it's a lot less computationally expensive than full fine-tuning, doesn't add any material inference time, and produces adapters that are much smaller and more portable.</p>
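    <p>The swap described above can be illustrated with toy 2×2 matrices (purely didactic — real model layers have thousands of dimensions):</p>

```javascript
// Apply and remove a LoRA adapter by adding/subtracting ΔW = B·A.
const matmul = (B, A) =>
  B.map(row => A[0].map((_, j) => row.reduce((s, b, i) => s + b * A[i][j], 0)));
const add = (X, Y) => X.map((row, i) => row.map((v, j) => v + Y[i][j]));
const sub = (X, Y) => X.map((row, i) => row.map((v, j) => v - Y[i][j]));

const W0 = [[1, 2], [3, 4]];   // frozen base weights
const B = [[1], [0]];          // d×r, with rank r = 1
const A = [[0.5, 0.5]];        // r×k
const deltaW = matmul(B, A);   // ΔW = B·A

const W = add(W0, deltaW);          // adapter applied: W0 + ΔW
const restored = sub(W, deltaW);    // adapter removed: back to W0
console.log(W);          // [[1.5, 2.5], [3, 4]]
console.log(restored);   // [[1, 2], [3, 4]]
```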
    <div>
      <h2>How can you use LoRAs with Workers AI?</h2>
      <a href="#how-can-you-use-loras-with-workers-ai">
        
      </a>
    </div>
    <p>Workers AI is very well-suited to run LoRAs because of the way we run serverless inference. The models in our catalog are always pre-loaded on our GPUs, meaning that we keep them warm so that your requests never encounter a cold start. This means that the base model is always available, and we can dynamically load and swap out LoRA adapters as needed. We can actually plug in multiple LoRA adapters to one base model, so we can serve multiple different fine-tuned inference requests at once.</p><p>When you fine-tune with LoRA, your output will be two files: your custom model weights (in <a href="https://huggingface.co/docs/safetensors/en/index">safetensors</a> format) and an adapter config file (in json format). To create these weights yourself, you can train a LoRA on your own data using the <a href="https://huggingface.co/docs/peft/en/tutorial/peft_model_config">Hugging Face PEFT</a> (Parameter-Efficient Fine-Tuning) library combined with the <a href="https://huggingface.co/docs/autotrain/en/llm_finetuning">Hugging Face AutoTrain LLM library</a>. You can also run your training tasks on services such as <a href="https://huggingface.co/autotrain">Auto Train</a> and <a href="https://colab.research.google.com/">Google Colab</a>. Alternatively, there are many open-source LoRA adapters <a href="https://huggingface.co/models?pipeline_tag=text-generation&amp;sort=trending&amp;search=mistral+lora">available on Hugging Face</a> today that cover a variety of use cases.</p><p>Eventually, we want to support the LoRA training workloads on our platform, but we’ll need you to bring your trained LoRA adapters to Workers AI today, which is why we’re calling this feature Bring Your Own (BYO) LoRAs.</p><p>For the initial open beta release, we are allowing people to use LoRAs with our Mistral, Llama, and Gemma models. We have set aside versions of these models which accept LoRAs, which you can access by appending <code>-lora</code> to the end of the model name. 
Your adapter must have been fine-tuned from one of our supported base models listed below:</p><ul><li><p><code>@cf/meta-llama/llama-2-7b-chat-hf-lora</code></p></li><li><p><code>@cf/mistral/mistral-7b-instruct-v0.2-lora</code></p></li><li><p><code>@cf/google/gemma-2b-it-lora</code></p></li><li><p><code>@cf/google/gemma-7b-it-lora</code></p></li></ul><p>As we are launching this feature in open beta, we have some limitations today to take note of: quantized LoRA models are not yet supported, LoRA adapters must be smaller than 100MB and have up to a max rank of 8, and you can try up to 30 LoRAs per account during our initial open beta. To get started with LoRAs on Workers AI, read the <a href="https://developers.cloudflare.com/workers-ai/fine-tunes/loras">Developer Docs</a>.</p><p>As always, we expect people to use Workers AI and our new BYO LoRA feature with our <a href="https://www.cloudflare.com/service-specific-terms-developer-platform/#developer-platform-terms">Terms of Service</a> in mind, including any model-specific restrictions on use contained in the models’ license terms.</p>
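    <p>From a Worker, invoking a fine-tune looks roughly like the sketch below. The adapter name is a hypothetical placeholder, and the exact input fields may differ — treat the Developer Docs as the source of truth for the request schema:</p>

```javascript
// Sketch: build the inputs for a -lora model. The adapter name below is a
// hypothetical placeholder; see the Developer Docs for the exact schema.
function buildLoraInputs(model, adapterName, prompt) {
  if (!model.endsWith("-lora")) {
    throw new Error("fine-tuned inference requires a -lora model variant");
  }
  return { model, inputs: { prompt, lora: adapterName } };
}

const req = buildLoraInputs(
  "@cf/mistral/mistral-7b-instruct-v0.2-lora",
  "my-adapter",                       // hypothetical adapter name
  "Tell me about rank decomposition"
);
// Inside a Worker this would become: await env.AI.run(req.model, req.inputs)
console.log(req.inputs.lora);         // "my-adapter"
```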
    <div>
      <h2>How did we build multi-tenant LoRA serving?</h2>
      <a href="#how-did-we-build-multi-tenant-lora-serving">
        
      </a>
    </div>
    <p>Serving multiple LoRA models simultaneously poses a challenge in terms of GPU resource utilization. While it is possible to batch inference requests to a base model, it is much more challenging to batch requests with the added complexity of serving unique LoRA adapters. To tackle this problem, we leverage the Punica CUDA kernel design in combination with global cache optimizations in order to handle the memory intensive workload of multi-tenant LoRA serving while offering low inference latency.</p><p>The Punica CUDA kernel was introduced in the paper <a href="https://arxiv.org/abs/2310.18547">Punica: Multi-Tenant LoRA Serving</a> as a method to serve multiple, significantly different LoRA models applied to the same base model. In comparison to previous inference techniques, the method offers substantial throughput and latency improvements. This optimization is achieved in part through enabling request batching even across requests serving different LoRA adapters.</p><p>The core of the Punica kernel system is a new CUDA kernel called Segmented Gather Matrix-Vector Multiplication (SGMV). SGMV allows a GPU to store only a single copy of the pre-trained model while serving different LoRA models. The Punica kernel design system consolidates the batching of requests for unique LoRA models to improve performance by parallelizing the feature-weight multiplication of different requests in a batch. Requests for the same LoRA model are then grouped to increase operational intensity. Initially, the GPU loads the base model while reserving most of its GPU memory for KV Cache. The LoRA components (A and B matrices) are then loaded on demand from remote storage (Cloudflare’s cache or <a href="https://www.cloudflare.com/developer-platform/r2/">R2</a>) when required by an incoming request. This on demand loading introduces only milliseconds of latency, which means that multiple LoRA adapters can be seamlessly fetched and served with minimal impact on inference performance. 
Frequently requested LoRA adapters are cached for the fastest possible inference.</p><p>Once a requested LoRA has been cached locally, the speed at which it can be made available for inference is constrained only by PCIe bandwidth. Even so, given that each request may require its own LoRA, it becomes critical that LoRA downloads and memory copy operations are performed asynchronously. The Punica scheduler tackles this exact challenge, batching only requests which currently have their required LoRA weights available in GPU memory, and queueing requests that do not until the required weights are available and the request can efficiently join a batch.</p><p>By effectively managing KV cache and batching these requests, it is possible to handle significant multi-tenant LoRA-serving workloads. A further and important optimization is the use of continuous batching. Common batching methods require all requests to the same adapter to reach their stopping condition before being released. Continuous batching allows a request in a batch to be released early so that it does not need to wait for the longest running request.</p><p>Given that LLMs deployed to Cloudflare’s network are available globally, it is important that LoRA adapter models are as well. Very soon, we will implement remote model files that are cached at Cloudflare’s edge to further reduce inference latency.</p>
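    <p>The batching rule described above can be sketched in a few lines (a deliberate simplification, with an in-memory set standing in for GPU-resident adapter weights):</p>

```javascript
// Simplified sketch of the scheduler's rule: requests whose LoRA weights are
// already resident in GPU memory join the batch; the rest are queued while
// their adapters are fetched asynchronously.
function scheduleBatch(requests, residentAdapters) {
  const batch = [];
  const queued = [];
  for (const req of requests) {
    (residentAdapters.has(req.lora) ? batch : queued).push(req);
  }
  return { batch, queued }; // queued requests retry once their weights arrive
}

const resident = new Set(["adapter-a", "adapter-b"]);
const { batch, queued } = scheduleBatch(
  [
    { id: 1, lora: "adapter-a" },
    { id: 2, lora: "adapter-c" },   // not resident yet: queued for download
    { id: 3, lora: "adapter-b" },
  ],
  resident
);
console.log(batch.map(r => r.id));   // [1, 3]
console.log(queued.map(r => r.id));  // [2]
```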
    <div>
      <h2>A roadmap for fine-tuning on Workers AI</h2>
      <a href="#a-roadmap-for-fine-tuning-on-workers-ai">
        
      </a>
    </div>
    <p>Launching support for LoRA adapters is an important step towards unlocking fine-tunes on our platform. In addition to the LLM fine-tunes available today, we look forward to supporting more models and a variety of task types, including image generation.</p><p>Our vision for Workers AI is to be the best place for developers to run their AI workloads — and this includes the process of fine-tuning itself. Eventually, we want to be able to run the fine-tuning training job as well as fully fine-tuned models directly on Workers AI. This unlocks many use cases for AI to be more relevant in organizations by empowering models to have more granularity and detail for specific tasks.</p><p>With AI Gateway, we will be able to help developers log their prompts and responses, which they can then use to fine-tune models with production data. Our vision is to have a one-click fine-tuning service, where log data from AI Gateway can be used to retrain a model (on Cloudflare) and then the fine-tuned model can be redeployed on Workers AI for inference. This will allow developers to personalize their AI models to fit their applications, with granularity as fine as a per-user level. The fine-tuned model can then be smaller and more optimized, helping users save time and money on AI inference – and the magic is that all of this can happen within our very own <a href="https://www.cloudflare.com/developer-platform/">Developer Platform</a>.</p><p>We’re excited for you to try the open beta for BYO LoRAs! Read our <a href="https://developers.cloudflare.com/workers-ai/fine-tunes">Developer Docs</a> for more details, and tell us what you think on <a href="https://discord.cloudflare.com">Discord</a>.</p> ]]></content:encoded>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Workers AI]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">4YpkeROwzr0CCHeFmwolIF</guid>
            <dc:creator>Michelle Chen</dc:creator>
            <dc:creator>Logan Grasby</dc:creator>
        </item>
        <item>
            <title><![CDATA[Unlocking new use cases with 17 new models in Workers AI, including new LLMs, image generation models, and more]]></title>
            <link>https://blog.cloudflare.com/february-28-2024-workersai-catalog-update/</link>
            <pubDate>Wed, 28 Feb 2024 20:00:00 GMT</pubDate>
            <description><![CDATA[ In February 2024, we added 8 models for text generation, classification, and code generation use cases. Today, we’re back with 17 more models, focused on enabling new types of tasks and use cases. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>On February 6th, 2024, we <a href="/february-2024-workersai-catalog-update">announced eight new models</a> that we added to our catalog for text generation, classification, and code generation use cases. Today, we’re back with seventeen (17!) more models, focused on enabling new types of tasks and use cases with Workers AI. Our catalog is now nearing 40 models, so we also decided to introduce a revamp of our developer documentation that enables users to easily search and discover new models.</p><p>The new models are listed below, and the full Workers AI catalog can be found on our <a href="https://developers.cloudflare.com/workers-ai/models/">new developer documentation</a>.</p><p><b>Text generation</b></p><ul><li><p><b>@cf/deepseek-ai/deepseek-math-7b-instruct</b></p></li><li><p><b>@cf/openchat/openchat-3.5-0106</b></p></li><li><p><b>@cf/microsoft/phi-2</b></p></li><li><p><b>@cf/tinyllama/tinyllama-1.1b-chat-v1.0</b></p></li><li><p><b>@cf/thebloke/discolm-german-7b-v1-awq</b></p></li><li><p><b>@cf/qwen/qwen1.5-0.5b-chat</b></p></li><li><p><b>@cf/qwen/qwen1.5-1.8b-chat</b></p></li><li><p><b>@cf/qwen/qwen1.5-7b-chat-awq</b></p></li><li><p><b>@cf/qwen/qwen1.5-14b-chat-awq</b></p></li><li><p><b>@cf/tiiuae/falcon-7b-instruct</b></p></li><li><p><b>@cf/defog/sqlcoder-7b-2</b></p></li></ul><p><b>Summarization</b></p><ul><li><p><b>@cf/facebook/bart-large-cnn</b></p></li></ul><p><b>Text-to-image</b></p><ul><li><p><b>@cf/lykon/dreamshaper-8-lcm</b></p></li><li><p><b>@cf/runwayml/stable-diffusion-v1-5-inpainting</b></p></li><li><p><b>@cf/runwayml/stable-diffusion-v1-5-img2img</b></p></li><li><p><b>@cf/bytedance/stable-diffusion-xl-lightning</b></p></li></ul><p><b>Image-to-text</b></p><ul><li><p><b>@cf/unum/uform-gen2-qwen-500m</b></p></li></ul>
    <div>
      <h3>New language models, fine-tunes, and quantizations</h3>
      <a href="#new-language-models-fine-tunes-and-quantizations">
        
      </a>
    </div>
    <p>Today’s catalog update includes a number of new language models so that developers can pick and choose the best LLMs for their use cases. Although most LLMs can be generalized to work in any instance, there are many benefits to choosing models that are tailored for a specific use case. We are excited to bring you some new large language models (LLMs), small language models (SLMs), and multi-language support, as well as some fine-tuned and <a href="https://www.cloudflare.com/learning/ai/what-is-quantization/">quantized</a> models.</p><p>Our latest LLM additions include <code>falcon-7b-instruct</code>, which is particularly exciting because of its innovative use of multi-query attention to generate high-precision responses. There’s also better language support with <code>discolm_german_7b</code> and the <code>qwen1.5</code> models, which are trained on multilingual data and boast impressive LLM outputs not only in English, but also in German (<code>discolm</code>) and Chinese (<code>qwen1.5</code>). The Qwen models range from 0.5B to 14B parameters and have shown particularly impressive accuracy in our testing. We’re also releasing a few new SLMs, which are growing in popularity because of their ability to do inference faster and cheaper without sacrificing accuracy. For SLMs, we’re introducing small but performant models like a 1.1B parameter version of Llama (<code>tinyllama-1.1b-chat-v1.0</code>) and a 2.7B parameter model from Microsoft (<code>phi-2</code>).</p><p>As the AI industry continues to accelerate, talented people have found ways to improve and optimize the performance and accuracy of models. 
We’ve added a fine-tuned model (openchat-3.5), which implements <a href="https://arxiv.org/abs/2309.11235">Conditioned Reinforcement Learning Fine-Tuning (C-RLFT)</a>, a technique that enables open-source language model development through the use of easily collectable mixed-quality data.</p><p>We’re really excited to be bringing all these new text generation models onto our platform today. The open-source community has been incredible at developing new AI breakthroughs, and we’re grateful for everyone’s contributions to training, fine-tuning, and quantizing these models. We’re thrilled to be able to host these models and make them accessible to all so that developers can quickly and easily build new applications with AI. You can check out the new models and their API schemas on <a href="https://developers.cloudflare.com/workers-ai/models/">our developer docs</a>.</p>
    <div>
      <h3>New image generation models</h3>
      <a href="#new-image-generation-models">
        
      </a>
    </div>
    <p>We are adding new Stable Diffusion pipelines and optimizations to enable powerful new image editing and generation use cases. We’ve added support for Stable Diffusion XL Lightning which generates high quality images in just two inference steps. Text-to-image is a really popular task for folks who want to take a text prompt and have the model generate an image based on the input, but Stable Diffusion is actually capable of much more. With this new Workers AI release, we’ve unlocked new pipelines so that you can experiment with different modalities of input and tasks with Stable Diffusion.</p><p>You can now use Stable Diffusion on Workers AI for image-to-image and inpainting use cases. Image-to-image allows you to transform an input image into a different image – for example, you can ask Stable Diffusion to generate a cartoon version of a portrait. Inpainting allows users to upload an image and transform the same image into something new – examples of inpainting include “expanding” the background of photos or colorizing black-and-white photos.</p><p>To use inpainting, you’ll need to input an image, a mask, and a prompt. The image is the original picture that you want modified, the mask is a monochrome screen that highlights the area that you want to be painted over, and the prompt tells the model what to generate in that space. Below is an example of the inputs and the request template to perform inpainting.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/RdRXcGbrGjlfhlZ3iF5Wj/f122e6a9405cfcfc1f8e8faa5fd6ec50/RIgkPzrxrrjSz2YPCYJItmJrOftfe5ZgcA-oS2mvIYI3L7enm62_lmSV9ua2d663tj2kpXSDUf__lVJAsaU2XOhlnUw-XS9Kt8CiwygsD30Mptndu1vQDrhafph6.png" />
            
            </figure>
            <pre><code>import { Ai } from '@cloudflare/ai';

export default {
    async fetch(request, env) {
        const formData = await request.formData();
        const prompt = formData.get("prompt");
        const imageFile = formData.get("image");
        const maskFile = formData.get("mask");

        const imageArrayBuffer = await imageFile.arrayBuffer();
        const maskArrayBuffer = await maskFile.arrayBuffer();

        const ai = new Ai(env.AI);
        const inputs = {
            prompt,
            image: [...new Uint8Array(imageArrayBuffer)],
            mask: [...new Uint8Array(maskArrayBuffer)],  
            strength: 0.8, // Adjust the strength of the transformation
            num_steps: 10, // Number of inference steps for the diffusion process
        };

        const response = await ai.run("@cf/runwayml/stable-diffusion-v1-5-inpainting", inputs);

        return new Response(response, {
            headers: {
                "content-type": "image/png",
            },
        });
    }
}</code></pre>
            
    <div>
      <h3>New use cases</h3>
      <a href="#new-use-cases">
        
      </a>
    </div>
    <p>We’ve also added new models to Workers AI that allow for various specialized tasks and use cases, such as LLMs specialized in solving math problems (<code>deepseek-math-7b-instruct</code>), generating SQL code (<code>sqlcoder-7b-2</code>), summarizing text (<code>bart-large-cnn</code>), and image captioning (<code>uform-gen2-qwen-500m</code>).</p><p>We wanted to release these to the public, so you can start building with them, but we’ll be releasing more demos and tutorial content over the next few weeks. Stay tuned to our <a href="https://twitter.com/CloudflareDev">X account</a> and <a href="https://developers.cloudflare.com/workers-ai/models/">Developer Documentation</a> for more information on how to use these new models.</p>
    <div>
      <h3>Optimizing our model catalog</h3>
      <a href="#optimizing-our-model-catalog">
        
      </a>
    </div>
    <p>AI model innovation is advancing rapidly, and so are the tools and techniques for fast and efficient inference. We’re excited to be incorporating new tools that help us optimize our models so that we can offer the best inference platform for everyone. Typically, when optimizing AI inference it is useful to serialize the model into a format such as <a href="https://onnxruntime.ai/">ONNX</a>, one of the most generally applicable options for this use case with broad hardware and model architecture support. An ONNX model can be further optimized by being converted to a <a href="https://github.com/NVIDIA/TensorRT">TensorRT</a> engine. This format, designed specifically for Nvidia GPUs, can result in lower inference latency and higher total throughput from LLMs. Choosing the right format usually comes down to what is best supported by specific model architectures and the hardware available for inference. We decided to leverage both TensorRT and ONNX formats for our new Stable Diffusion pipelines, which represent a series of models applied for a specific task.</p>
    <div>
      <h3>Explore more on our new developer docs</h3>
      <a href="#explore-more-on-our-new-developer-docs">
        
      </a>
    </div>
    <p>You can explore all these new models in our <a href="https://developers.cloudflare.com/workers-ai/models/">new developer docs</a>, where you can learn more about individual models, their prompt templates, and properties like context token limits. We’ve redesigned the model page to be simpler for developers to explore new models and learn how to use them. You’ll now see all the models on one page for searchability, with the task type on the right-hand side. Then, you can click into individual model pages to see code examples on how to use those models.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6vsVFSfZ9pvpWkTXdl6iWw/9b03d7b28a3420f378564da0eae5fe44/image3.png" />
            
            </figure><p>We hope you try out these new models and build something new on Workers AI! We have more updates coming soon, including more demos, tutorials, and Workers AI pricing. Let us know what you’re working on and other models you’d like to see on our <a href="https://discord.cloudflare.com">Discord</a>.</p> ]]></content:encoded>
            <category><![CDATA[Workers AI]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">Q57bXuDbwJJ9BWzWqsgEB</guid>
            <dc:creator>Michelle Chen</dc:creator>
            <dc:creator>Logan Grasby</dc:creator>
        </item>
        <item>
            <title><![CDATA[Adding new LLMs, text classification and code generation models to the Workers AI catalog]]></title>
            <link>https://blog.cloudflare.com/february-2024-workersai-catalog-update/</link>
            <pubDate>Tue, 06 Feb 2024 20:00:10 GMT</pubDate>
            <description><![CDATA[ Workers AI is now bigger and better with 8 new models and improved model performance ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5bUPtXgJU5a797f2LMkI7k/b830752f7f52d7ab46bf396e00c93ea7/image2-1.png" />
            
            </figure><p>Over the last few months, the Workers AI team has been hard at work making improvements to our AI platform. We launched back in September, and in November we added more models like Code Llama, Stable Diffusion, and Mistral, as well as improvements like streaming and longer context windows.</p><p>Today, we’re excited to announce the release of eight new models.</p><p>The new models are highlighted below, but check out our full model catalog with over 20 models <a href="https://developers.cloudflare.com/workers-ai/">in our developer docs</a>.</p><p><b>Text generation</b></p><ul><li><code>@hf/thebloke/llama-2-13b-chat-awq</code></li><li><code>@hf/thebloke/zephyr-7b-beta-awq</code></li><li><code>@hf/thebloke/mistral-7b-instruct-v0.1-awq</code></li><li><code>@hf/thebloke/openhermes-2.5-mistral-7b-awq</code></li><li><code>@hf/thebloke/neural-chat-7b-v3-1-awq</code></li><li><code>@hf/thebloke/llamaguard-7b-awq</code></li></ul><p><b>Code generation</b></p><ul><li><code>@hf/thebloke/deepseek-coder-6.7b-base-awq</code></li><li><code>@hf/thebloke/deepseek-coder-6.7b-instruct-awq</code></li></ul>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/693UrRZZSZJo8omR5ex2Vk/ac76b952bac6fd613cb8e2c79b7e2f10/image1.png" />
            
            </figure>
    <div>
      <h3>Bringing you the best of open source</h3>
      <a href="#bringing-you-the-best-of-open-source">
        
      </a>
    </div>
    <p>Our mission is to support a wide array of open source models and tasks. In line with this, we're excited to announce a preview of the latest models and features available for deployment on Cloudflare's network.</p><p>One of the standout models is <code>deepseek-coder-6.7b</code>, which notably scores <a href="https://github.com/deepseek-ai/deepseek-coder">approximately 15% higher</a> on popular benchmarks than comparable Code Llama models. This performance advantage is attributed to its diverse training data, which includes both English and Chinese code generation datasets. In addition, the <code>openhermes-2.5-mistral-7b</code> model showcases how high-quality fine-tuning datasets can improve the accuracy of base models. This Mistral 7B fine-tune outperforms the base model by <a href="https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B#benchmark-results">approximately 10% on many LLM benchmarks</a>.</p><p>We're also introducing models that incorporate Activation-aware Weight Quantization (AWQ), such as <code>llama-2-13b-chat-awq</code>. This quantization technique is just one of the strategies to improve memory efficiency in Large Language Models. While <a href="https://www.cloudflare.com/learning/ai/what-is-quantization/">quantization</a> generally boosts inference efficiency in AI models, it often does so at the expense of precision. AWQ strikes a balance to mitigate this tradeoff.</p><p>The pace of progress in AI can be overwhelming, but Cloudflare's Workers AI simplifies getting started with the latest models. We handle the latest advancements and make them easily accessible from a Worker or our HTTP APIs. You are only ever an API call or Workers binding away from cutting-edge models. Simply put, Workers AI allows developers to concentrate on delivering exceptional user experiences without the burdens of deployment, infrastructure, or scalability concerns.</p>
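    <p>To make the memory tradeoff concrete, here is a rough back-of-envelope sketch (illustrative figures only, not measured numbers for these specific models) of the weight memory a 13B-parameter model needs at 16-bit versus 4-bit precision:</p>
            <pre><code>// Rough weight-memory estimate for a 13B-parameter model.
// Illustrative only: real deployments also store quantization scales,
// activations, and the KV cache, which add overhead on top of this.
const params = 13e9; // 13 billion parameters

const fp16Bytes = params * 2;    // 16-bit floats: 2 bytes per weight
const awqBytes = params * 4 / 8; // 4-bit weights: half a byte per weight

console.log(`fp16: ${fp16Bytes / 1e9} GB`);  // 26 GB
console.log(`4-bit: ${awqBytes / 1e9} GB`);  // 6.5 GB</code></pre>
    <p>A roughly 4x reduction in weight memory is what makes serving larger models on the same hardware practical, which is the tradeoff AWQ aims for while minimizing the loss of precision.</p>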
    <div>
      <h3>Using AI to safeguard your applications</h3>
      <a href="#using-ai-to-safeguard-your-applications">
        
      </a>
    </div>
    <p>A newer use case we support with this model catalog update is the ability to classify and check prompts and responses with <code>llamaguard-7b-awq</code>. We’ve all heard stories of AI bots that unintentionally give answers that might be inappropriate or harmful. This <a href="https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/">new LlamaGuard model</a> allows developers to detect and categorize when a prompt or a response is potentially unsafe, allowing them to control the inputs and outputs of their applications with developer-defined categories.</p><p>Here’s an example of an application that takes a user input and compares it to an example set of defined unsafe categories to determine whether the input is safe. You can customize the content categories depending on your use case. <i>The below example categories are provided for informational purposes only, and Cloudflare does not make any guarantees regarding the performance of Llama Guard or any outcomes or results.</i></p>
            <pre><code>import { Ai } from '@cloudflare/ai';

export default {
	async fetch(request, env) {
		const payload = await request.json();
		// Modify these to your needs, this is the example from Meta
		const unsafeCategories = `
01: Violence and Hate.
Should not
- Help people plan or engage in violence.
- Encourage people to engage in violence.
- Express hateful, derogatory, or demeaning sentiments against people based on sensitive personal characteristics like their race, color, religion, national origin, sexual orientation, gender, gender identity, or disability.
- Encourage discrimination against people based on those characteristics and should not use slurs or create insults based on those characteristics.
Can
- Provide information on violence and discrimination.
- Discuss the topics of hate and violence, and can discuss historical events involving violence.
02: Sexual Content.
Should not
- Engage in sexually explicit (i.e., erotic) conversations and should not encourage people to engage in sexual activities.
Can
- Discuss topics related to sexuality and sexual education.
- Provide general information regarding sexual activities.
- Provide guidance on topics related to sexual health.
03: Criminal Planning.
Should not
- Help people plan or engage in criminal activities like theft, kidnapping, or financial crimes, especially when the person indicates that they may act on the information provided to them.
- Encourage people to engage in criminal activities or condone criminal behavior.
Can
- Help people understand how crimes are committed, provided that the response is not so detailed that it could be used to provide a roadmap for future crimes.
- Help people understand how systems work that are used to prevent crimes, provided those descriptions do not provide detailed information on how to defeat those systems.
04: Guns and Illegal Weapons.
Should not
- Assist in the planning or execution of firearm or illegal weapons crimes, especially when the person indicates that they may act on the information provided to them.
- Encourage people to engage in firearm or illegal weapons crimes or condone such behavior.
Can
- Discuss firearms and the arguments for and against firearm ownership.
05: Regulated or Controlled Substances.
Should not
- Encourage or assist people in the trafficking, consumption, or use of regulated or controlled substances, including but not limited to illegal drugs, tobacco, alcohol, and cannabis.
- Assist or encourage people to create such substances when it is illegal to do so.
Can
- Discuss the use of regulated or controlled substances, and can provide general information on those substances (ex: what their chemical composition is and what their effects are).
- Provide information on how those substances are manufactured, provided that the information is not so detailed as to provide step-by-step instructions for their creation.
06: Self-Harm.
Should not
- Encourage people to harm themselves.
- Romanticize or condone self-harm or suicide.
- Provide information regarding the methods of suicide or self-harm.
- Provide any information or advice, other than information regarding appropriate health resources, to people who express or imply that they wish to harm themselves.
Can
- Provide general information regarding self-harm and suicide (ex: statistics, the mental health challenges faced by those who take these actions).
Should
- Provide information regarding appropriate health resources to people who imply or express the desire to harm themselves.
		`;

		// Assemble the prompt
		const prompt = `Task: Check if there is unsafe content in 'User' messages in conversations according to our safety policy with the below categories.

		&lt;BEGIN UNSAFE CONTENT CATEGORIES&gt;
		${unsafeCategories}
		&lt;END UNSAFE CONTENT CATEGORIES&gt;

		&lt;BEGIN CONVERSATION&gt;
		User: ${payload.userContent}
		&lt;END CONVERSATION&gt;
	`;
		const ai = new Ai(env.AI);
		const response = await ai.run('@hf/thebloke/llamaguard-7b-awq', {
			prompt,
		});
		return Response.json(response);
	},
};</code></pre>
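    <p>Llama Guard replies with a short verdict string rather than structured JSON: typically "safe" on its own, or "unsafe" followed by the matching category numbers on a second line. The exact output can vary, so treat the helper below as a sketch built on that assumed shape, not a guaranteed format:</p>
            <pre><code>// Parse a Llama Guard-style verdict string.
// Assumed shape (not guaranteed): "safe", or "unsafe\n01,03" with the
// violated category numbers from the policy above on the second line.
function parseGuardVerdict(text) {
	const [verdict, categories = ''] = text.trim().split('\n');
	return {
		safe: verdict.trim().toLowerCase() === 'safe',
		categories: categories
			.split(',')
			.map((c) => c.trim())
			.filter((c) => c.length > 0),
	};
}</code></pre>
    <p>Your Worker can then branch on <code>safe</code> before forwarding the user’s message to a downstream model, and log or block requests by category.</p>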
            
    <div>
      <h3>How do I get started?</h3>
      <a href="#how-do-i-get-started">
        
      </a>
    </div>
    <p>Try out our new models within the AI section of the <a href="https://dash.cloudflare.com/?to=/:account/ai/workers-ai">Cloudflare dashboard</a> or take a look at our <a href="https://developers.cloudflare.com/workers-ai/models/">Developer Docs</a> to get started. With the Workers AI platform you can build an app with Workers and Pages, store data with R2, D1, Workers KV, or Vectorize, and run model inference with Workers AI – all in one place. Having more models allows developers to build all different kinds of applications, and we plan to continually update our model catalog to bring you the best of open-source.</p><p>We’re excited to see what you build! If you’re looking for inspiration, take a look at our <a href="https://workers.cloudflare.com/built-with/collections/ai-workers/">collection of “Built-with” stories</a> that highlight what others are building on Cloudflare’s Developer Platform. Stay tuned for a pricing announcement and higher usage limits coming in the next few weeks, as well as more models coming soon. <a href="https://discord.cloudflare.com/">Join us on Discord</a> to share what you’re working on and any feedback you might have.</p> ]]></content:encoded>
            <category><![CDATA[Workers AI]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Open Source]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">58duMHip3s7DGNRo47fweV</guid>
            <dc:creator>Michelle Chen</dc:creator>
            <dc:creator>Logan Grasby</dc:creator>
        </item>
    </channel>
</rss>