As part of the ‘12 days of OpenAI’ launches this December, the ChatGPT-maker announced that it was expanding its reinforcement fine-tuning research programme so that accepted applicants could work on creating expert models tailored to address “complex, domain-specific tasks.”
“This new model customization technique enables developers to customize our models using dozens to thousands of high quality tasks and grade the model’s response with provided reference answers. This technique reinforces how the model reasons through similar problems and improves its accuracy on specific tasks in that domain,” said OpenAI in a blog post.
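To illustrate the idea at a conceptual level, the sketch below shows what a small set of domain-specific tasks with reference answers and a simple grader might look like. It is an illustration only, written in Python under stated assumptions: the data structures, field names, and the exact-match grading function are hypothetical and do not reflect OpenAI's actual Reinforcement Fine-Tuning API.

```python
# Conceptual sketch of reinforcement fine-tuning inputs: domain-specific
# tasks paired with expert reference answers, plus a grader that scores a
# model's response against the reference. All names and the grading scheme
# here are illustrative assumptions, not OpenAI's actual RFT API.

from dataclasses import dataclass


@dataclass
class Task:
    prompt: str            # domain-specific question posed to the model
    reference_answer: str  # expert-provided answer used for grading


# A tiny task set; in practice, developers supply dozens to thousands of tasks.
tasks = [
    Task(
        prompt="Which gene variant is most strongly associated with condition X?",
        reference_answer="VARIANT_A",
    ),
    Task(
        prompt="Classify the compound described above as acidic or basic.",
        reference_answer="acidic",
    ),
]


def exact_match_grade(model_response: str, reference: str) -> float:
    """Return a reward in [0, 1]: 1.0 for an exact match, 0.0 otherwise.

    Real graders can be more nuanced (partial credit, rubric-based scoring);
    this is the simplest possible example of grading against a reference.
    """
    return 1.0 if model_response.strip().lower() == reference.strip().lower() else 0.0


# During reinforcement fine-tuning, the grader's score acts as the reward
# signal: responses that reason their way to the reference answer are
# reinforced, improving accuracy on similar tasks in that domain.
if __name__ == "__main__":
    simulated_response = "variant_a"  # stand-in for a model's answer
    reward = exact_match_grade(simulated_response, tasks[0].reference_answer)
    print(f"Reward for task 1: {reward}")
```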
The company invited research institutes, universities, and enterprises across diverse domains to apply for the programme. Accepted applicants will get access to OpenAI’s Reinforcement Fine-Tuning API in alpha and will be asked to share feedback on it with the company.
OpenAI added that it hoped to make reinforcement fine-tuning publicly available in early 2025.
On December 5, OpenAI announced ChatGPT Pro, a new $200-per-month subscription that gives users unlimited access to the OpenAI o1 model, o1-mini, GPT-4o, and Advanced Voice.
Published – December 07, 2024 12:27 pm IST