Why Streaming?
For certain ML models, generation can take a long time. With LLMs in particular, a long output can take 10 to 20 seconds to generate. However, because LLMs generate tokens in sequence, useful output can be made available to the user sooner. To support this, Truss supports streaming output. In this example, we build a Truss that streams the output of the Falcon-7B model.

Set up the imports and key constants
In this example, we use the Hugging Face transformers library to build a text generation model.

model/model.py
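A minimal sketch of the imports and constants, assuming the tiiuae/falcon-7b-instruct checkpoint and an illustrative default for max_new_tokens:

```python
from threading import Thread
from typing import Dict

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer

# Assumption: the instruct-tuned Falcon-7B checkpoint; any Falcon-7B
# variant on Hugging Face works the same way.
CHECKPOINT = "tiiuae/falcon-7b-instruct"
# Illustrative default; tune to your latency and output-length needs.
DEFAULT_MAX_NEW_TOKENS = 150
```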
Define the load function
In the load function of the Truss, we implement the logic for downloading the model and loading it into memory.
model/model.py
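A sketch of the load function under those assumptions; Truss calls load once when the model server starts, before any predictions are served:

```python
class Model:
    def __init__(self, **kwargs):
        self._tokenizer = None
        self._model = None

    def load(self):
        # Download the tokenizer and weights from Hugging Face (cached
        # after the first run) and place the model on the GPU.
        self._tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
        self._model = AutoModelForCausalLM.from_pretrained(
            CHECKPOINT,
            torch_dtype=torch.bfloat16,
            device_map="auto",
        )
```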
Define the predict function
In the predict function of the Truss, we implement the actual inference logic. The three main steps are:

- Tokenize the input
- Call the model's generate function, making sure to pass a TextIteratorStreamer. This is what gives us streaming output. We also run generation in a Thread, so that it does not block the main invocation.
- Return a generator that iterates over the TextIteratorStreamer object
model/model.py
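A sketch of the first step, assuming the request carries the input under a prompt key: tokenize the input and create the TextIteratorStreamer that generate will write into.

```python
    def predict(self, request: Dict):
        prompt = request.pop("prompt")
        # Step 1: tokenize the input and move it onto the GPU.
        inputs = self._tokenizer(prompt, return_tensors="pt").to("cuda")

        # The streamer receives decoded text as generate produces it;
        # skip_prompt keeps the echoed input out of the output stream.
        streamer = TextIteratorStreamer(self._tokenizer, skip_prompt=True)
```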
When building the generation arguments, we pass the streamer object that we created previously.
model/model.py
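Continuing the sketch: the generation arguments include the streamer, and generate runs in a background Thread so the request handler can start yielding output immediately. max_new_tokens uses the illustrative constant from above.

```python
        # Step 2: pass the streamer in the generation arguments and run
        # generate in a thread so it does not block the main invocation.
        generation_kwargs = dict(
            **inputs,
            streamer=streamer,
            max_new_tokens=DEFAULT_MAX_NEW_TOKENS,
        )
        thread = Thread(target=self._model.generate, kwargs=generation_kwargs)
        thread.start()
```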
Finally, we return a generator that yields output from the streamer, which produces text and yields it until the generation is complete. We define this inner function to create our generator.
model/model.py
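The closing sketch of predict: iterating over the TextIteratorStreamer blocks until each new chunk of text is ready, so the generator yields output incrementally until generation finishes.

```python
        # Step 3: return a generator that iterates over the streamer.
        def inner():
            for text in streamer:
                yield text
            thread.join()  # ensure the generation thread has finished

        return inner()
```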
Setting up the config.yaml
Running Falcon-7B requires torch, transformers, and a few other related libraries.

config.yaml
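A sketch of the requirements section of config.yaml; torch and transformers come from the text above, while accelerate and einops are assumptions based on Falcon's typical dependencies:

```yaml
requirements:
  - torch
  - transformers
  # Assumed extras commonly needed for Falcon checkpoints:
  - accelerate
  - einops
```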
Configure resources for Falcon
Note that we need an A10G GPU to run this model.

config.yaml
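A sketch of the resources section, using Truss's accelerator and use_gpu fields to request the GPU:

```yaml
resources:
  accelerator: A10G
  use_gpu: true
```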