Input options for the ChatOllama chat model class.

interface ChatOllamaInput {
    baseUrl?: string;
    checkModelExists?: boolean;
    embeddingOnly?: boolean;
    f16Kv?: boolean;
    format?: string;
    frequencyPenalty?: number;
    keepAlive?: string | number;
    logitsAll?: boolean;
    lowVram?: boolean;
    mainGpu?: number;
    mirostat?: number;
    mirostatEta?: number;
    mirostatTau?: number;
    model?: string;
    numBatch?: number;
    numCtx?: number;
    numGpu?: number;
    numKeep?: number;
    numPredict?: number;
    numThread?: number;
    numa?: boolean;
    penalizeNewline?: boolean;
    presencePenalty?: number;
    repeatLastN?: number;
    repeatPenalty?: number;
    seed?: number;
    streaming?: boolean;
    temperature?: number;
    tfsZ?: number;
    topK?: number;
    topP?: number;
    typicalP?: number;
    useMlock?: boolean;
    useMmap?: boolean;
    vocabOnly?: boolean;
}
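Ollama's REST API expects snake_case option names (e.g. `num_ctx`, `top_k`), so a client consuming this interface typically maps the camelCase fields above into the request body's `options` object. The helper below is an illustrative sketch of that mapping, not the library's actual implementation; note that top-level fields such as `baseUrl`, `model`, `format`, and `keepAlive` are sent outside `options` in Ollama's API.

```typescript
type OllamaOptions = Record<string, string | number | boolean>;

// Convert camelCase interface fields into Ollama's snake_case option names,
// e.g. numCtx -> num_ctx, topK -> top_k, repeatPenalty -> repeat_penalty.
function toOllamaOptions(input: Record<string, unknown>): OllamaOptions {
  const options: OllamaOptions = {};
  for (const [key, value] of Object.entries(input)) {
    if (value === undefined) continue; // skip unset optional fields
    const snake = key.replace(/([A-Z])/g, (m) => `_${m.toLowerCase()}`);
    options[snake] = value as string | number | boolean;
  }
  return options;
}

const opts = toOllamaOptions({ numCtx: 4096, topK: 40, repeatPenalty: 1.1 });
// opts is { num_ctx: 4096, top_k: 40, repeat_penalty: 1.1 }
```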

Properties

baseUrl?: string

The host URL of the Ollama server.

checkModelExists?: boolean

Whether or not to check that the model exists on the local machine before invoking it. If set to true, the model will be pulled when it does not exist.

Default: false
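The check-then-pull behavior can be sketched against Ollama's `/api/tags` (list local models) and `/api/pull` endpoints. This is an illustrative assumption about how the flag works, not the library's code; `/api/tags` returns fully tagged names such as `"llama3:latest"`, so an untagged request is matched as `:latest`.

```typescript
interface TagEntry {
  name: string; // fully tagged model name, e.g. "llama3:latest"
}

// Pure name-matching helper: treat an untagged model as ":latest".
function modelExists(models: TagEntry[], model: string): boolean {
  const wanted = model.includes(":") ? model : `${model}:latest`;
  return models.some((m) => m.name === wanted);
}

// Hypothetical wrapper: list local models, pull only if missing.
async function ensureModel(baseUrl: string, model: string): Promise<void> {
  const res = await fetch(`${baseUrl}/api/tags`);
  const { models } = (await res.json()) as { models: TagEntry[] };
  if (!modelExists(models, model)) {
    // stream: false waits for the pull to finish in a single response
    await fetch(`${baseUrl}/api/pull`, {
      method: "POST",
      body: JSON.stringify({ name: model, stream: false }),
    });
  }
}
```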
embeddingOnly?: boolean
f16Kv?: boolean
format?: string
frequencyPenalty?: number
keepAlive?: string | number
"5m"
logitsAll?: boolean
lowVram?: boolean
mainGpu?: number
mirostat?: number
mirostatEta?: number
mirostatTau?: number
model?: string

The model to invoke. If the model does not exist, it will be pulled.

"llama3"
numBatch?: number
numCtx?: number
numGpu?: number
numKeep?: number
numPredict?: number
numThread?: number
numa?: boolean
penalizeNewline?: boolean
presencePenalty?: number
repeatLastN?: number
repeatPenalty?: number
seed?: number
streaming?: boolean
temperature?: number
tfsZ?: number
topK?: number
topP?: number
typicalP?: number
useMlock?: boolean
useMmap?: boolean
vocabOnly?: boolean
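Taken together, a typical configuration object for this interface might look like the following. The values shown are common choices for illustration, not library defaults beyond those documented above; `http://localhost:11434` is Ollama's default host and port.

```typescript
const config = {
  baseUrl: "http://localhost:11434", // Ollama's default host/port
  model: "llama3",
  checkModelExists: true, // pull the model first if it is missing
  keepAlive: "5m",        // keep the model loaded for 5 minutes after use
  temperature: 0.8,
  numCtx: 4096,           // context window size in tokens
  seed: 42,               // fixed seed for reproducible sampling
  streaming: true,
};
```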