| external help file | PSOpenAI-help.xml |
|---|---|
| Module Name | PSOpenAI |
| online version | https://github.com/mkht/PSOpenAI/blob/main/Docs/Request-ChatCompletion.md |
| schema | 2.0.0 |
Creates a completion for the chat message.
```
Request-ChatCompletion
[-Message] <String>
[-Role <String>]
[-Name <String>]
[-Model <String>]
[-SystemMessage <String[]>]
[-DeveloperMessage <String[]>]
[-Modalities <String[]>]
[-Voice <String>]
[-InputAudio <String>]
[-InputAudioFormat <String>]
[-AudioOutFile <String>]
[-OutputAudioFormat <String>]
[-Images <String[]>]
[-ImageDetail <String>]
[-Tools <IDictionary[]>]
[-ToolChoice <Object>]
[-ParallelToolCalls]
[-InvokeTools <String>]
[-WebSearchContextSize <String>]
[-WebSearchUserLocationCity <String>]
[-WebSearchUserLocationCountry <String>]
[-WebSearchUserLocationRegion <String>]
[-WebSearchUserLocationTimeZone <String>]
[-Prediction <String>]
[-Temperature <Double>]
[-TopP <Double>]
[-NumberOfAnswers <UInt16>]
[-Stream]
[-Store]
[-Verbosity <String>]
[-ReasoningEffort <String>]
[-MetaData <IDictionary>]
[-StopSequence <String[]>]
[-MaxTokens <Int32>]
[-MaxCompletionTokens <Int32>]
[-PresencePenalty <Double>]
[-FrequencyPenalty <Double>]
[-LogitBias <IDictionary>]
[-LogProbs <Boolean>]
[-TopLogProbs <UInt16>]
[-ResponseFormat <Object>]
[-JsonSchema <String>]
[-Seed <Int64>]
[-ServiceTier <String>]
[-PromptCacheKey <String>]
[-PromptCacheRetention <String>]
[-SafetyIdentifier <String>]
[-User <String>]
[-AsBatch]
[-CustomBatchId <String>]
[-TimeoutSec <Int32>]
[-MaxRetryCount <Int32>]
[-ApiBase <Uri>]
[-ApiKey <Object>]
[-Organization <String>]
[-History <Object[]>]
[<CommonParameters>]
```
Creates a completion for the chat message.
https://developers.openai.com/api/reference/chat-completions/overview/
```powershell
PS C:\> Request-ChatCompletion -Message "Who are you?" | select Answer

I am an AI language model created by OpenAI, designed to assist with ...
```

```powershell
PS C:\> $FirstQA = Request-ChatCompletion -Message "What is the population of the United States?"
PS C:\> $FirstQA.Answer
As of September 2021, the estimated population of the United States is around 331.4 million people.

PS C:\> $SecondQA = $FirstQA | Request-ChatCompletion -Message "Translate the previous answer into French."
PS C:\> $SecondQA.Answer
En septembre 2021, la population estimée des États-Unis est d'environ 331,4 millions de personnes.
```

```powershell
PS C:\> Request-ChatCompletion 'Please describe ChatGPT in 100 characters.' -Stream | Write-Host -NoNewline
```

```powershell
PS C:\> $PingFunction = New-ChatCompletionFunction -Command 'Test-Connection' -IncludeParameters ('TargetName','Count')
PS C:\> $Message = 'Ping the Google Public DNS address three times and briefly report the results.'
PS C:\> $GPTPingAnswer = Request-ChatCompletion -Message $Message -Model gpt-4o -Tools $PingFunction -InvokeTools Auto
PS C:\> $GPTPingAnswer | select Answer
```

```powershell
PS C:\> Request-ChatCompletion -Message $Message -Model gpt-4o -Images "C:\image.png"
```

```powershell
PS C:\> Request-ChatCompletion -Modalities text, audio -InputAudio 'C:\hello.mp3' -AudioOutFile 'C:\response.mp3' -Model gpt-4o-audio-preview
```

The messages to generate chat completions.
Type: String
Aliases: Text
Required: False
Position: 1
Accept pipeline input: True (ByValue)

The role of the messages author. One of user, system, developer, or function.
The default is user.
Type: String
Required: False
Position: Named

The name of the author of this message.
This is an optional field, and may contain a-z, A-Z, 0-9, hyphens, and underscores, with a maximum length of 64 characters.
Type: String
Required: False
Position: Named

The name of the model to use.
The default value is gpt-3.5-turbo.
Type: String
Required: False
Position: Named
Accept pipeline input: True (ByPropertyName)
Default value: gpt-3.5-turbo

An optional text to set the behavior of the assistant.
Type: String[]
Aliases: system, RolePrompt
Required: False
Position: Named

Developer-provided instructions that the model should follow, regardless of messages sent by the user. With o1 models and newer, developer messages replace the previous system messages.
Type: String[]
Required: False
Position: Named

Output types that you would like the model to generate for this request.
Some models can generate both text and audio. To request both, you can specify: ("text", "audio")
Type: String[]
Required: False
Position: Named

The voice the model uses to respond.
Type: String
Required: False
Position: Named

The path of the audio file to pass to the model. Supported formats are wav and mp3.
Type: String
Aliases: input_audio
Required: False
Position: Named

Specifies the format of the input audio file. If not specified, the format is automatically determined from the file extension.
Type: String
Required: False
Position: Named

Specifies where the audio response from the model will be saved. If the model does not return an audio response, nothing is saved.
Type: String
Required: False
Position: Named

Specifies the format of the output audio file. The default value is mp3.
Type: String
Required: False
Position: Named

An array of images to pass to the model. You can specify local image files or remote URLs.
Type: String[]
Required: False
Position: Named

Controls how the model processes the image and generates its textual understanding. You can select from Low or High.
See more details: https://developers.openai.com/api/docs/guides/images-vision/
Type: String
Required: False
Position: Named
Default value: Auto

A list of tools the model may call. Use this to provide a list of functions the model may generate JSON inputs for.
https://github.com/mkht/PSOpenAI/blob/main/Guides/How_to_call_functions_with_ChatGPT.ipynb
Type: System.Collections.IDictionary[]
Required: False
Position: Named

Controls how the model responds to function calls.
`none` means the model does not call a function and responds to the end-user. `auto` means the model can pick between responding to the end-user or calling a function.
Specifying a particular function via `@{type = "function"; function = @{name = "my_function"}}` forces the model to call that function.
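A minimal sketch of the accepted shapes; here `$MyTools` and `my_function` are placeholders for a tool list you have defined, for example with New-ChatCompletionFunction:

```powershell
# Let the model decide whether to answer directly or call a tool ($MyTools is a placeholder).
Request-ChatCompletion -Message 'What is the weather in Tokyo?' -Tools $MyTools -ToolChoice 'auto'

# Force the model to call one specific function; the name must match a
# function definition supplied via -Tools ('my_function' is a placeholder).
$Choice = @{type = 'function'; function = @{name = 'my_function'}}
Request-ChatCompletion -Message 'What is the weather in Tokyo?' -Tools $MyTools -ToolChoice $Choice
```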
Type: Object
Aliases: tool_choice
Required: False
Position: Named

Whether to enable parallel function calling during tool use. The default is true (enabled).
Type: SwitchParameter
Aliases: parallel_tool_calls
Required: False
Position: Named
Default value: True

Selects the action to be taken when the GPT model requests a function call.

- None: The requested function is not executed. This is the default.
- Auto: Automatically executes the requested function.
- Confirm: Displays a confirmation to the user before executing the requested function.
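As a sketch of the Confirm behavior, reusing the New-ChatCompletionFunction helper shown in the examples above (a valid API key is still required):

```powershell
# Define a callable function and ask the model to use it; with -InvokeTools Confirm,
# the cmdlet prompts for confirmation before actually running Test-Connection.
$PingFunction = New-ChatCompletionFunction -Command 'Test-Connection' -IncludeParameters ('TargetName','Count')
Request-ChatCompletion -Message 'Ping 8.8.8.8 once.' -Model gpt-4o -Tools $PingFunction -InvokeTools Confirm
```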
Type: String
Required: False
Position: Named

High-level guidance for the amount of context window space to use for the web search. One of low, medium, or high.
Type: String
Required: False
Position: Named

Approximate location parameters for the web search: the user's city.
Type: String
Required: False
Position: Named

Approximate location parameters for the web search: the user's country.
Type: String
Required: False
Position: Named

Approximate location parameters for the web search: the user's region.
Type: String
Required: False
Position: Named

Approximate location parameters for the web search: the user's time zone.
Type: String
Required: False
Position: Named

Static predicted output content, such as the content of a text file that is being regenerated.
Type: String
Required: False
Position: Named

What sampling temperature to use, between 0 and 2.
Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
Type: Double
Required: False
Position: Named

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass.
So 0.1 means only the tokens comprising the top 10% probability mass are considered.
Type: Double
Aliases: top_p
Required: False
Position: Named

How many chat completion choices to generate for each input message.
The default value is 1.
Type: UInt16
Aliases: n
Required: False
Position: Named
Default value: 1

If set, partial message deltas will be sent, like in ChatGPT.
Type: SwitchParameter
Required: False
Position: Named
Default value: False

Whether or not to store the output of this chat completion request for use in model distillation or evals.
Type: SwitchParameter
Required: False
Position: Named
Default value: False

Controls the verbosity level of the response.
Valid values are low, medium, or high.
Type: String
Required: False
Position: Named
Default value: medium

Constrains effort on reasoning for reasoning models. Supported values are none, minimal, low, medium, high, and xhigh.
Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.
Type: String
Aliases: reasoning_effort
Required: False
Position: Named

Developer-defined tags and values used for filtering completions in the dashboard.
Type: IDictionary
Required: False
Position: Named

Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.
Type: String[]
Aliases: stop
Required: False
Position: Named

This value is now deprecated in favor of MaxCompletionTokens.
Type: Int32
Aliases: max_tokens
Required: False
Position: Named

An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens.
Type: Int32
Aliases: max_completion_tokens
Required: False
Position: Named

Number between -2.0 and 2.0.
Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
Type: Double
Aliases: presence_penalty
Required: False
Position: Named

Number between -2.0 and 2.0.
Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
Type: Double
Aliases: frequency_penalty
Required: False
Position: Named

Modify the likelihood of specified tokens appearing in the completion.
Accepts a map of tokens to an associated bias value from -100 to 100. You can use ConvertTo-Token to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
As an example, you can pass a value like this: `@{23182 = 20; 88847 = -100}`
ID 23182 maps to "apple" and ID 88847 maps to "banana". Thus, this example increases the likelihood of the word "apple" being included in the response from the AI and greatly reduces the likelihood of the word "banana" being included.
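A sketch of building the bias table with ConvertTo-Token instead of hard-coding IDs. Token IDs differ between models, so this assumes the tokenizer matching the model you call; the exact ConvertTo-Token parameters shown here are an assumption:

```powershell
# Convert words to token IDs for the target model's tokenizer (IDs vary per model).
$AppleToken  = (ConvertTo-Token -Text 'apple' -Model 'gpt-4o')[0]
$BananaToken = (ConvertTo-Token -Text 'banana' -Model 'gpt-4o')[0]

# Encourage "apple" and effectively ban "banana" in the response.
$Bias = @{ $AppleToken = 20; $BananaToken = -100 }
Request-ChatCompletion -Message 'Name a fruit.' -Model gpt-4o -LogitBias $Bias
```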
Type: IDictionary
Aliases: logit_bias
Required: False
Position: Named

Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the content of message.
Type: Boolean
Required: False
Position: Named

An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to true if this parameter is used.
Type: UInt16
Aliases: top_logprobs
Required: False
Position: Named

Specifies the format that the model must output.

- `text` is the default.
- `json_object` enables JSON mode, which ensures the message the model generates is valid JSON.
- `json_schema` enables Structured Outputs, which ensures the model will match your supplied JSON schema.
- `raw_response` returns the raw response content from the API.
Type: Object
Aliases: response_format
Required: False
Position: Named

Specifies an object or data structure to represent the JSON Schema that the model should be constrained to follow.
Required if json_schema is specified for -ResponseFormat. Otherwise, it is ignored.
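A sketch of Structured Outputs usage; the schema below is illustrative only, and the wrapper shape (name/strict/schema) follows OpenAI's Structured Outputs format:

```powershell
# An illustrative JSON Schema that constrains the reply to {"answer": <number>}.
$Schema = @'
{
  "name": "math_answer",
  "strict": true,
  "schema": {
    "type": "object",
    "properties": { "answer": { "type": "number" } },
    "required": ["answer"],
    "additionalProperties": false
  }
}
'@

$Result = Request-ChatCompletion -Message 'What is 2 + 2?' -Model gpt-4o -ResponseFormat json_schema -JsonSchema $Schema
($Result.Answer | ConvertFrom-Json).answer
```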
Type: String
Required: False
Position: Named

If specified, the system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result.
Type: Int64
Required: False
Position: Named

Specifies the latency tier to use for processing the request. This parameter is relevant for customers subscribed to the scale tier service.
Type: String
Aliases: service_tier
Required: False
Position: Named

Used by OpenAI to cache responses for similar requests to optimize your cache hit rates.
Type: String
Aliases: prompt_cache_key
Required: False
Position: Named

The retention policy for the prompt cache. Set to 24h to enable extended prompt caching, which keeps cached prefixes active for longer, up to a maximum of 24 hours.
Type: String
Aliases: prompt_cache_retention
Required: False
Position: Named

A stable identifier used to help detect users of your application that may be violating OpenAI's usage policies. The IDs should be a string that uniquely identifies each user.
Type: String
Aliases: safety_identifier
Required: False
Position: Named

(deprecated) This field is being replaced by SafetyIdentifier and PromptCacheKey.
Type: String
Required: False
Position: Named

If this is specified, this cmdlet returns an object for batch input.
It does not perform an API request to OpenAI. It is useful with the Start-Batch cmdlet.
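A sketch of pairing -AsBatch with Start-Batch; building the inputs is local, while submitting the batch does call the API (a valid API key is required for that step):

```powershell
# Build batch input objects locally; no API request happens here.
$BatchInputs = @(
    Request-ChatCompletion -Message 'Hello!'   -Model gpt-4o-mini -AsBatch
    Request-ChatCompletion -Message 'Goodbye!' -Model gpt-4o-mini -AsBatch
)

# Submitting them as a batch job does perform an API request.
$Batch = $BatchInputs | Start-Batch
```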
Type: SwitchParameter
Required: False
Position: Named
Default value: FalseA unique id that will be used to match outputs to inputs of batch. Must be unique for each request in a batch.
This parameter is valid only when the -AsBatch swicth is used. Otherwise, it is simply ignored.
Type: String
Required: False
Position: Named

Specifies how long the request can be pending before it times out.
The default value is 0 (infinite).
Type: Int32
Required: False
Position: Named
Default value: 0

Number between 0 and 100.
Specifies the maximum number of retries if the request fails.
The default value is 0 (no retry).
Note: Retries are performed only when the request fails with a 429 (rate limit reached) or 5xx (server-side) error. Other errors (e.g., authentication failure) are not retried.
Type: Int32
Required: False
Position: Named
Default value: 0

Specifies an API endpoint URL such as: https://your-api-endpoint.test/v1
If not specified, it will use https://api.openai.com/v1
Type: System.Uri
Required: False
Position: Named
Default value: https://api.openai.com/v1

Specifies the API key for authentication.
The type of the data should be [string] or [securestring].
If not specified, it will try to use $global:OPENAI_API_KEY or $env:OPENAI_API_KEY
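For example, the key can be read interactively as a [securestring] and passed per call instead of relying on the environment variable:

```powershell
# Read the key without echoing it to the console, then supply it explicitly to the request.
$SecureKey = Read-Host -Prompt 'OpenAI API key' -AsSecureString
Request-ChatCompletion -Message 'Hello!' -ApiKey $SecureKey
```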
Type: Object
Required: False
Position: Named

Specifies the Organization ID which is used for an API request.
If not specified, it will try to use $global:OPENAI_ORGANIZATION or $env:OPENAI_ORGANIZATION
Type: string
Aliases: OrgId
Required: False
Position: Named

An object for keeping the conversation history.
Type: Object[]
Required: False
Position: Named
Accept pipeline input: True (ByPropertyName)

https://developers.openai.com/api/reference/chat-completions/overview/
https://developers.openai.com/api/reference/resources/chat/subresources/completions/methods/create/
