Important
- Foundry Local is available in preview. Public preview releases provide early access to features that are in active deployment.
- Features, approaches, and processes can change or have limited capabilities before general availability (GA).
This article shows you how to build a translation app by using the Foundry Local SDK and LangChain. Use a local model to translate text between languages.
Prerequisites
Before starting this tutorial, you need:
- Foundry Local installed on your computer. Read the Get started with Foundry Local guide for installation instructions.
- Python 3.10 or later installed on your computer. You can download Python from the official website.
Install Python packages
You need to install the following Python packages:
pip install "langchain[openai]"
pip install foundry-local-sdk
Tip
We recommend using a virtual environment to avoid package conflicts. You can create a virtual environment using either venv or conda.
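If you go the venv route, a minimal setup looks like the following (macOS/Linux syntax; `.venv` is just a conventional directory name, and on Windows you activate with `.venv\Scripts\activate` instead):

```shell
# Create an isolated environment so this tutorial's packages
# don't conflict with other projects
python3 -m venv .venv
# Activate it (macOS/Linux)
. .venv/bin/activate
# The interpreter now resolves to the environment's copy
python -c "import sys; print(sys.prefix)"
```

With the environment active, the pip install commands above install into `.venv` rather than your global site-packages.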
Create a translation application
Create a new Python file named translation_app.py in your favorite IDE and add the following code:
import os
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from foundry_local import FoundryLocalManager
# By using an alias, the most suitable model will be downloaded
# to your end-user's device.
# TIP: You can find a list of available models by running the
# following command: `foundry model list`.
alias = "qwen2.5-0.5b"
# Create a FoundryLocalManager instance. This will start the Foundry
# Local service if it is not already running and load the specified model.
manager = FoundryLocalManager(alias)
# Configure ChatOpenAI to use your locally-running model
llm = ChatOpenAI(
    model=manager.get_model_info(alias).id,
    base_url=manager.endpoint,
    api_key=manager.api_key,
    temperature=0.6,
    streaming=False
)
# Create a translation prompt template
prompt = ChatPromptTemplate.from_messages([
    (
        "system",
        "You are a helpful assistant that translates {input_language} to {output_language}."
    ),
    ("human", "{input}")
])
# Build a simple chain by connecting the prompt to the language model
chain = prompt | llm
text = "I love to code."
print(f"Translating '{text}' to French...")

# Run the chain with your inputs
ai_msg = chain.invoke({
    "input_language": "English",
    "output_language": "French",
    "input": text
})

# Print the result content
print(f"Response: {ai_msg.content}")
References
- Foundry Local SDK reference
- Get started with Foundry Local
Note
One of the key benefits of Foundry Local is that it automatically selects the most suitable model variant for the user's hardware. For example, if the user has a GPU, it downloads the GPU version of the model. If the user has an NPU (neural processing unit), it downloads the NPU version. If the user has neither a GPU nor an NPU, it downloads the CPU version of the model.
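Conceptually, the selection behaves like a priority lookup over the device's capabilities. The sketch below is a stdlib-only illustration of that idea; the variant names and function are hypothetical, not the SDK's real logic:

```python
def pick_variant(alias: str, has_gpu: bool, has_npu: bool) -> str:
    """Illustrative only: hypothetical variant names, not the SDK's actual algorithm."""
    if has_gpu:
        return f"{alias}-gpu"
    if has_npu:
        return f"{alias}-npu"
    return f"{alias}-cpu"

# On a machine with neither accelerator, the CPU variant is chosen.
print(pick_variant("qwen2.5-0.5b", has_gpu=False, has_npu=False))
# qwen2.5-0.5b-cpu
```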
Run the application
To run the application, open a terminal and navigate to the directory where you saved the translation_app.py file. Then, run the following command:
python translation_app.py
You're done when you see a Response: line with the translated text.
You should see output similar to:
Translating 'I love to code.' to French...
Response: <translated text>
Prerequisites
Before starting this tutorial, you need:
- Node.js 20 or later installed on your computer. You can download Node.js from the official website.
Set up project
Use Foundry Local in your JavaScript project by following these instructions, which work on Windows, macOS, and Linux:
- Create a new JavaScript project:

  mkdir app-name
  cd app-name
  npm init -y
  npm pkg set type=module

- Install the Foundry Local SDK package:

  npm install --winml foundry-local-sdk
  npm install openai
Install LangChain packages
You also need to install the following Node.js packages:
npm install @langchain/openai @langchain/core
Create a translation application
Create a new JavaScript file named translation_app.js in your favorite IDE and add the following code:
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { FoundryLocalManager } from 'foundry-local-sdk';
// Initialize the Foundry Local SDK
console.log('Initializing Foundry Local SDK...');
const endpointUrl = 'http://localhost:5764';
const manager = FoundryLocalManager.create({
  appName: 'foundry_local_samples',
  logLevel: 'info',
  webServiceUrls: endpointUrl
});
console.log('✓ SDK initialized successfully');
// Get the model object
const modelAlias = 'qwen2.5-0.5b'; // List available models with `foundry model list`
const model = await manager.catalog.getModel(modelAlias);
model.selectVariant('qwen2.5-0.5b-instruct-generic-cpu:4');
// Download the model
console.log(`\nDownloading model ${modelAlias}...`);
await model.download();
console.log('✓ Model downloaded');
// Load the model
console.log(`\nLoading model ${modelAlias}...`);
await model.load();
console.log('✓ Model loaded');
// Start the web service
console.log('\nStarting web service...');
manager.startWebService();
console.log('✓ Web service started');
// Configure ChatOpenAI to use your locally-running model
const llm = new ChatOpenAI({
  model: model.id,
  configuration: {
    baseURL: endpointUrl + '/v1',
    apiKey: 'notneeded'
  },
  temperature: 0.6,
  streaming: false
});
// Create a translation prompt template
const prompt = ChatPromptTemplate.fromMessages([
  {
    role: "system",
    content: "You are a helpful assistant that translates {input_language} to {output_language}."
  },
  {
    role: "user",
    content: "{input}"
  }
]);
// Build a simple chain by connecting the prompt to the language model
const chain = prompt.pipe(llm);
const input = "I love to code.";
console.log(`Translating '${input}' to French...`);
// Run the chain with your inputs
try {
  const aiMsg = await chain.invoke({
    input_language: "English",
    output_language: "French",
    input: input
  });
  // Print the result content
  console.log(`Response: ${aiMsg.content}`);
} catch (err) {
  console.error("Error:", err);
}
// Tidy up
console.log('Unloading model and stopping web service...');
await model.unload();
manager.stopWebService();
console.log(`✓ Model unloaded and web service stopped`);
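In a longer-running application, you'd typically want the unload-and-stop cleanup to run even if the translation step throws. A generic sketch of that pattern with try/finally (the `withCleanup` helper and its arguments are hypothetical, for illustration only):

```javascript
// Generic pattern: run cleanup regardless of whether the work throws.
async function withCleanup(work, cleanup) {
  try {
    return await work();
  } finally {
    await cleanup();
  }
}

// Stand-ins for the chain invocation and the model/service teardown.
const result = await withCleanup(
  async () => "translated",
  async () => console.log("cleaned up"),
);
console.log(result);
```

In the tutorial code above, the equivalent would be wrapping the chain invocation in try and moving the `model.unload()` / `manager.stopWebService()` calls into finally.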
Run the application
To run the application, open a terminal and navigate to the directory where you saved the translation_app.js file. Then, run the following command:
node translation_app.js
You're done when you see a Response: line with the translated text.
You should see output similar to:
Translating 'I love to code.' to French...
Response: J'aime le coder
Troubleshooting
- If you see a service connection error, restart the Foundry Local service and try again.
- The first run can take longer because Foundry Local might download the model.
- If Node.js fails with an import or top-level await error, confirm your project is configured for ES modules.
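The ES-module requirement comes from the `import` statements and top-level `await` in translation_app.js. The `npm pkg set type=module` command in the setup step writes this into package.json; a minimal example of the resulting file:

```json
{
  "name": "app-name",
  "type": "module"
}
```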
Related content
- Explore the LangChain documentation for advanced features.
- Compile Hugging Face models to run on Foundry Local