OpenAI introduced a new service tier for developers on Thursday via its application programming interface (API). Dubbed Flex processing, it cuts AI usage costs for developers in half compared to standard pricing. However, the lower prices come with slower response times and occasional resource unavailability. The new API feature is currently available in beta for select reasoning-focused large language models (LLMs). The San Francisco-based AI firm said this service tier can be useful for non-production and non-priority tasks.
OpenAI Adds New Service Tier in API
On its support page, the AI firm detailed the new service tier. Flex processing is currently available in beta for the Chat Completions and Responses APIs, and works with the o3 and o4-mini AI models. Developers can activate the new mode by setting the service tier parameter to flex in an API request.
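Based on OpenAI's description, opting in is a single-parameter change. A minimal sketch using the official openai Python SDK might look like this (the model name and prompt are illustrative, and an OPENAI_API_KEY environment variable is assumed):

```python
from openai import OpenAI

client = OpenAI()

# Request the o3 model through the cheaper, slower Flex tier
# via the Chat Completions API.
response = client.chat.completions.create(
    model="o3",
    messages=[{"role": "user", "content": "Summarise this dataset row: ..."}],
    service_tier="flex",  # opt into Flex processing
)

print(response.choices[0].message.content)
```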
The downside of the cheaper API pricing is significantly longer processing times. OpenAI says developers opting for Flex processing should expect slower response times and occasional resource unavailability. Developers may also run into API request timeouts if the prompt is lengthy or the request is complex. As per the AI firm, this mode can be helpful for non-production or low-priority tasks such as model evaluations, data enrichment, or asynchronous workloads.
Notably, OpenAI highlights that developers can avoid timeout errors by increasing the default timeout. By default, these APIs time out after 10 minutes, but with Flex processing, lengthy and complex prompts can take longer than that. The company suggests that increasing the timeout will reduce the chances of getting an error.
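As a sketch, the openai Python SDK lets the timeout be raised at the client level or per request; the 15-minute value below is illustrative rather than an OpenAI recommendation:

```python
from openai import OpenAI

# Raise the client timeout above the 10-minute default (value in seconds).
client = OpenAI(timeout=900.0)

# A per-request override is also possible via with_options().
flex_client = client.with_options(timeout=900.0)

response = flex_client.chat.completions.create(
    model="o4-mini",
    messages=[{"role": "user", "content": "Evaluate these model outputs: ..."}],
    service_tier="flex",
)
```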
Additionally, Flex processing may occasionally lack the resources to handle a developer's request, in which case it returns a "429 Resource Unavailable" error code. To manage these scenarios, developers can retry requests with exponential backoff, or switch to the default service tier if timely completion is necessary. OpenAI said it will not charge developers when they receive this error.
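A minimal sketch of that retry pattern is below, assuming the SDK surfaces the 429 response as a RateLimitError; the retry count, delays, and fallback behaviour are illustrative choices, not OpenAI guidance:

```python
import time
from openai import OpenAI, RateLimitError

client = OpenAI()

def flex_request_with_fallback(messages, max_retries=4):
    """Try the Flex tier with exponential backoff, then fall back to standard."""
    delay = 2.0
    for _ in range(max_retries):
        try:
            return client.chat.completions.create(
                model="o3",
                messages=messages,
                service_tier="flex",
            )
        except RateLimitError:
            # Flex capacity was unavailable; wait and retry with a longer delay.
            time.sleep(delay)
            delay *= 2
    # If Flex never frees up and timely completion matters, use the default tier.
    return client.chat.completions.create(model="o3", messages=messages)
```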
Currently, the o3 AI model is priced at $10 (roughly Rs. 854) per million input tokens and $40 (roughly Rs. 3,418) per million output tokens in the standard mode. Flex processing brings the input cost down to $5 (roughly Rs. 427) and the output cost to $20 (roughly Rs. 1,709). Similarly, the new service tier charges $0.55 (roughly Rs. 47) per million input tokens and $2.20 (roughly Rs. 188) per million output tokens for the o4-mini AI model, instead of $1.10 (roughly Rs. 94) for input and $4.40 (roughly Rs. 376) for output in the standard mode.
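To put the halved rates in concrete terms, here is a short illustrative calculation at the published o3 prices; the token counts are hypothetical:

```python
def cost_usd(input_tokens, output_tokens, input_rate, output_rate):
    """Cost in USD given per-million-token rates."""
    return (input_tokens / 1_000_000) * input_rate + (output_tokens / 1_000_000) * output_rate

# Example batch job: 3 million input tokens, 1 million output tokens.
standard = cost_usd(3_000_000, 1_000_000, 10.0, 40.0)  # $70.00 on the standard tier
flex = cost_usd(3_000_000, 1_000_000, 5.0, 20.0)       # $35.00 on Flex

print(standard, flex)  # Flex halves the bill for the same token volume
```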