On the 5th day of OpenAI, ChatGPT comes to Apple's iOS 18.2

[Image: OpenAI displayed on a phone]

With the holiday season upon us, many companies are capitalizing on it through deals, promotions, and other campaigns. OpenAI is joining in with its "12 days of OpenAI" event series.

On Wednesday, OpenAI announced in a post on X that, starting Dec. 5, the company would host 12 days of live streams and release "a bunch of new things, big and small."

Also: OpenAI's Sora AI video generator is here – how to try it

Here's everything you need to know about the campaign, as well as a round-up of every day's drops.

What are the '12 days of OpenAI'?

OpenAI CEO Sam Altman shared a few more details about the event, which kicked off at 10 a.m. PT on Dec. 5 and runs daily for 12 weekdays, each day bringing a live stream with a launch or demo. The launches will range from "big ones" to "stocking stuffers," according to Altman.

What has been dropped so far?

Wednesday, December 11

Apple released iOS 18.2 today. The release includes integrations with ChatGPT across Siri, Writing Tools, and Visual Intelligence. As a result, today's live stream focused on walking through the integration.

  • Siri can now recognize when you ask a question outside its scope that would be better answered by ChatGPT. In those instances, it asks whether you'd like to process the query using ChatGPT; no request is sent to ChatGPT without a message notifying you and asking for permission, keeping control in the user's hands as much as possible.
  • Visual Intelligence refers to a new feature for the iPhone 16 lineup that users can access by tapping the Camera Control button. Once the camera is open, users can point it at something and search the web with Google, or use ChatGPT to learn more about what they are viewing or perform other tasks such as translating or summarizing text.
  • Writing Tools now features a new "Compose" tool, which allows users to create text from scratch by leveraging ChatGPT. With the feature, users can even generate images using DALL-E.

All of the above features are subject to ChatGPT's daily usage limits, the same limits users hit on the free version of ChatGPT. Users can choose whether to enable the ChatGPT integration in Settings.

Read more about it here: iOS 18.2 rolls out to iPhones: Try these 6 new AI features today

Tuesday, December 10

  • Canvas is coming to all web users, regardless of plan, in GPT-4o, meaning it is no longer limited to the beta for ChatGPT Plus users.
  • Canvas is now built natively into GPT-4o, so you can simply call on Canvas in a conversation instead of having to find the toggle in the model selector.
  • The Canvas interface is the same as what beta users saw in ChatGPT Plus: a left-hand panel showing the Q&A exchange and a right-hand panel showing your project, with edits displayed as they happen, plus shortcuts.
  • Canvas can also be used with custom GPTs. It is turned on by default when creating a new GPT, and there is an option to add Canvas to existing GPTs.
  • Canvas can also run Python code directly, allowing ChatGPT to execute coding tasks such as fixing bugs (see the sketch below).
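
To make that concrete, below is a toy illustration of the kind of small Python script Canvas can run and revise in place. The moving_average function and the off-by-one bug described in the comments are invented for this example; they are not taken from OpenAI's demo.

```python
# Toy illustration: the kind of small script Canvas can run and debug in place.
# The function and the off-by-one bug noted below are invented for this example.

def moving_average(values, window):
    """Return the simple moving average over each complete sliding window."""
    if window <= 0 or window > len(values):
        raise ValueError("window must be between 1 and len(values)")
    # A buggy draft might iterate only to len(values) - window and silently drop
    # the final window; the corrected range below includes every complete window.
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

if __name__ == "__main__":
    print(moving_average([1, 2, 3, 4, 5], 2))  # [1.5, 2.5, 3.5, 4.5]
```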

Read more about it here: I'm a ChatGPT power user – and Canvas is still my favorite productivity feature a month later

Monday, December 9

OpenAI teased the third-day announcement as "something you've been waiting for," followed by the much-anticipated drop of its video model — Sora. Here's what you need to know:

  • Known as Sora Turbo, the new video model is smarter and cheaper than the model previewed in February.
  • Access is coming to the US later today; a ChatGPT Plus or Pro subscription is all that's needed.
  • Sora can generate video-to-video, text-to-video, and more.
  • ChatGPT Plus users can generate up to 50 videos per month at 480p resolution or fewer videos at 720p. The Pro Plan offers 10x more usage.
  • Sora features an explore page where users can view each other's creations. Users can click on any video to see how it was created.
  • A live demo showed the model in use: the presenters entered a prompt and picked an aspect ratio, duration, and even presets. I found the live demo's video results realistic and stunning.
  • OpenAI also unveiled Storyboard, a tool that lets users generate inputs for every frame in a sequence.

Friday, December 6

On the second day of "shipmas," OpenAI expanded access to its Reinforcement Fine-Tuning Research Program:

  • The Reinforcement Fine-Tuning program allows developers and machine learning engineers to fine-tune OpenAI models to "excel at specific sets of complex, domain-specific tasks," according to OpenAI.
  • Reinforcement Fine-Tuning refers to a customization technique in which developers define a model's behavior by supplying tasks and grading the model's outputs. The model uses this feedback as a guide to improve, becoming better at reasoning through similar problems and improving overall accuracy (a rough sketch of the grading idea follows this list).
  • OpenAI encourages research institutes, universities, and enterprises to apply to the program, particularly those that handle narrow sets of complex tasks, would benefit from AI assistance, and work on tasks that have an objectively correct answer.
  • Spots are limited; interested applicants can apply by filling out this form.
  • OpenAI aims to make Reinforcement Fine-Tuning publicly available in early 2025.
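
For readers wondering what "supplying tasks and grading the outputs" looks like in practice, here is a conceptual sketch of the grading idea. Everything in it, from the exact_match_grader function to the placeholder model callable, is illustrative code showing the general technique, not OpenAI's actual Reinforcement Fine-Tuning API, which had not been made public at the time of writing.

```python
# Conceptual sketch of the grading idea behind Reinforcement Fine-Tuning:
# each training task pairs a prompt with a grader that scores the model's
# answer, and those scores become the feedback signal used to improve the
# model. This is illustrative code around a placeholder `model` callable,
# not OpenAI's actual fine-tuning API.

def exact_match_grader(model_answer: str, reference: str) -> float:
    """Score 1.0 if the answer matches the objectively correct reference, else 0.0."""
    return 1.0 if model_answer.strip().lower() == reference.strip().lower() else 0.0

tasks = [
    {"prompt": "What is 17 * 6?", "reference": "102"},
    {"prompt": "Name the chemical symbol for gold.", "reference": "Au"},
]

def score_model(model, tasks):
    """Grade the model on every task; these scores would drive the fine-tuning update."""
    return [exact_match_grader(model(t["prompt"]), t["reference"]) for t in tasks]

if __name__ == "__main__":
    # Trivial stand-in "model" so the sketch runs end to end.
    dummy_model = lambda prompt: "102" if "17" in prompt else "Au"
    print(score_model(dummy_model, tasks))  # [1.0, 1.0]
```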

Thursday, December 5

OpenAI started with a bang, unveiling two major upgrades to its chatbot: a new tier of ChatGPT subscription, ChatGPT Pro, and the full version of the company's o1 model.

The full version of o1:

  • Will be better for all kinds of prompts, beyond math and science
  • Will make major mistakes about 34% less often than o1-preview, while thinking about 50% faster
  • Rolls out today, replacing o1-preview for all ChatGPT Plus and Pro users
  • Lets users input images, as seen in the demo, to provide multimodal reasoning (reasoning over both text and images); a sketch of what such a mixed prompt looks like follows this list
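
The demo took place in the ChatGPT app, but for a sense of how a combined text-and-image prompt is structured, here is a minimal sketch using the OpenAI Python SDK. Treat the choice of GPT-4o as the model and the example image URL as assumptions made for illustration; the article does not say whether the full o1 accepts image inputs through the API.

```python
# Minimal sketch of a multimodal (text + image) prompt using the OpenAI
# Python SDK. GPT-4o is used as a stand-in model name and the image URL is a
# placeholder; the live demo used the ChatGPT app, not this API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What does this diagram show, step by step?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/diagram.png"}},
        ],
    }],
)
print(response.choices[0].message.content)
```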

ChatGPT Pro:

  • Is meant for ChatGPT Plus superusers, granting them unlimited access to the best OpenAI has to offer, including o1-mini, GPT-4o, and Advanced Voice Mode
  • Features o1 pro mode, which uses more compute to reason through the hardest science and math problems
  • Costs $200 per month

Where can you access the live stream?

The live streams are held on the OpenAI website and posted to its YouTube channel immediately afterward. To make access easier, OpenAI will also post a link to each live stream on its X account about 10 minutes before it starts, at approximately 10 a.m. PT/1 p.m. ET daily.

What can you expect?

The releases remain a surprise, but many anticipate that Sora, OpenAI's video model first announced in February, will launch as part of one of the bigger drops. Since that announcement, the model has been available to a select group of red teamers and testers, and it was leaked last week by testers airing grievances about "unpaid labor," according to reports.

Also: OpenAI's o1 lies more than any major AI model. Why that matters

Other rumored releases include a new, fuller version of the company's o1 LLM with more advanced reasoning capabilities, and a Santa voice for OpenAI's Advanced Voice Mode, per code spotted by users only a couple of weeks ago under the codename "Straw."
