OpenVINO
OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference. OpenVINO™ Runtime lets you run the same optimized model across a variety of hardware devices, accelerating deep learning performance across use cases such as large language models, computer vision, automatic speech recognition, and more.
OpenVINO models can be run locally through the HuggingFacePipeline class. To deploy a model with OpenVINO, specify the backend="openvino" parameter to use OpenVINO as the inference framework.
To use this integration, you should have the optimum-intel Python package installed with its OpenVINO extras.
%pip install --upgrade-strategy eager "optimum[openvino,nncf]" langchain-huggingface --quiet
Model Loading
Models can be loaded by specifying the model parameters using the from_model_id method.
If you have an Intel GPU, you can specify model_kwargs={"device": "GPU"} to run inference on it (see the GPU variant after the example below).
from langchain_huggingface import HuggingFacePipeline
ov_config = {"PERFORMANCE_HINT": "LATENCY", "NUM_STREAMS": "1", "CACHE_DIR": ""}
ov_llm = HuggingFacePipeline.from_model_id(
    model_id="gpt2",
    task="text-generation",
    backend="openvino",
    model_kwargs={"device": "CPU", "ov_config": ov_config},
    pipeline_kwargs={"max_new_tokens": 10},
)
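As a minimal sketch of the GPU variant mentioned above, only the device value changes (this assumes an Intel GPU and the OpenVINO GPU drivers are available on your machine):
ov_llm_gpu = HuggingFacePipeline.from_model_id(
    model_id="gpt2",
    task="text-generation",
    backend="openvino",
    # "GPU" targets an Intel GPU; everything else matches the CPU example above
    model_kwargs={"device": "GPU", "ov_config": ov_config},
    pipeline_kwargs={"max_new_tokens": 10},
)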
Models can also be loaded by passing an existing optimum-intel pipeline directly:
from optimum.intel.openvino import OVModelForCausalLM
from transformers import AutoTokenizer, pipeline
model_id = "gpt2"
device = "CPU"
tokenizer = AutoTokenizer.from_pretrained(model_id)
ov_model = OVModelForCausalLM.from_pretrained(
    model_id, export=True, device=device, ov_config=ov_config
)
ov_pipe = pipeline(
    "text-generation", model=ov_model, tokenizer=tokenizer, max_new_tokens=10
)
ov_llm = HuggingFacePipeline(pipeline=ov_pipe)
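Either way, the resulting ov_llm is a standard LangChain LLM, so it can be invoked directly with a raw prompt string; a minimal sketch (prompt text chosen for illustration):
# Run a single generation with the OpenVINO-backed pipeline
print(ov_llm.invoke("What is electroencephalography?"))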
Create Chain
With the model loaded into memory, you can compose it with a prompt to form a chain.
from langchain_core.prompts import PromptTemplate
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate.from_template(template)
chain = prompt | ov_llm
question = "What is electroencephalography?"
print(chain.invoke({"question": question}))
To get the response without the prompt echoed back, you can bind skip_prompt=True to the LLM.
chain = prompt | ov_llm.bind(skip_prompt=True)
question = "What is electroencephalography?"
print(chain.invoke({"question": question}))
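Because the chain is a standard LangChain Runnable, you can also stream its output; a minimal sketch using the generic stream interface (depending on the installed versions, output may arrive in a single chunk rather than token by token):
# Stream the generated text instead of waiting for the full response
for chunk in chain.stream({"question": question}):
    print(chunk, end="", flush=True)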