GPT-4o

Generative Pre-trained Transformer 4 Omni (GPT-4o)
Developer(s): OpenAI
Initial release: May 13, 2024
Predecessor: GPT-4 Turbo
Successor: OpenAI o1
License: Proprietary
Website: openai.com/index/hello-gpt-4o

GPT-4o ("o" for "omni") is a multilingual, multimodal generative pre-trained transformer developed by OpenAI and released in May 2024.[1] GPT-4o is free, but with a usage limit that is five times higher for ChatGPT Plus subscribers.[2] It can process and generate text, images and audio.[3] Its application programming interface (API) is twice as fast and half the price of its predecessor, GPT-4 Turbo.[1]

Background

Before its official release, multiple versions of GPT-4o were secretly tested on the Large Model Systems Organization's (LMSYS) Chatbot Arena under three different names: gpt2-chatbot, im-a-good-gpt2-chatbot, and im-also-a-good-gpt2-chatbot.[4] On 7 May 2024, Sam Altman tweeted "im-a-good-gpt2-chatbot", which was widely interpreted as confirmation that these were new OpenAI models being A/B tested.[5]

Capabilities

GPT-4o achieved state-of-the-art results on voice, multilingual, and vision benchmarks, setting new records in audio speech recognition and translation.[6][7] It scored 88.7 on the Massive Multitask Language Understanding (MMLU) benchmark, compared to 86.5 for GPT-4.[8] Unlike GPT-3.5 and GPT-4, which rely on other models to process sound, GPT-4o natively supports voice-to-voice interaction.[8] Sam Altman noted on 15 May 2024 that GPT-4o's voice-to-voice capabilities were not yet integrated into ChatGPT and that the older voice pipeline was still being used.[9] This new mode, called Advanced Voice Mode, is currently in a limited alpha release[10] and is based on the 4o-audio-preview model.[11] On 1 October 2024, OpenAI introduced the Realtime API.[12]
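
For illustration, the following is a minimal, hypothetical sketch of opening a Realtime API session over WebSocket in Python. The endpoint, beta header, event names, and preview model name follow OpenAI's launch announcement and may have changed since; the websocket-client package and the specific prompt are assumptions made for the example.

```python
# Hedged sketch: connecting to the Realtime API over WebSocket and asking
# GPT-4o for a spoken reply. Endpoint, header, event names, and model name
# are assumptions based on the launch documentation and may have changed.
import json
import os

from websocket import create_connection  # pip install websocket-client

url = "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview"
ws = create_connection(
    url,
    header=[
        f"Authorization: Bearer {os.environ['OPENAI_API_KEY']}",
        "OpenAI-Beta: realtime=v1",
    ],
)

# Ask the model to generate an audio-and-text response.
ws.send(json.dumps({
    "type": "response.create",
    "response": {"modalities": ["audio", "text"],
                 "instructions": "Greet the user briefly."},
}))

# The server streams back JSON events (audio deltas, transcripts, etc.).
for _ in range(10):
    event = json.loads(ws.recv())
    print(event.get("type"))

ws.close()
```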

The model supports over 50 languages,[1] which OpenAI claims cover over 97% of speakers.[13] Mira Murati demonstrated the model's multilingual capability during the live-streamed OpenAI demonstration event on 13 May 2024 by speaking Italian to it and having it translate between English and Italian. In addition, the new tokenizer[14] uses fewer tokens for certain languages, especially those not based on the Latin alphabet, making the model cheaper to use for those languages.[8]
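
The tokenizer difference can be checked with OpenAI's open-source tiktoken library. A brief sketch, assuming the o200k_base encoding used by GPT-4o and the cl100k_base encoding used by GPT-4; the sample sentence is illustrative and the exact token counts depend on the input:

```python
# Comparing token counts for a non-Latin-script sentence under the GPT-4o
# tokenizer (o200k_base) and the GPT-4 tokenizer (cl100k_base).
import tiktoken  # pip install tiktoken

text = "नमस्ते, आप कैसे हैं?"  # Hindi: "Hello, how are you?"

gpt4o_tokens = tiktoken.get_encoding("o200k_base").encode(text)
gpt4_tokens = tiktoken.get_encoding("cl100k_base").encode(text)

# GPT-4o's tokenizer typically produces fewer tokens here, which lowers cost.
print(len(gpt4o_tokens), "tokens (GPT-4o)")
print(len(gpt4_tokens), "tokens (GPT-4)")
```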

GPT-4o has a knowledge cutoff of October 2023,[15][16] but can access the Internet when up-to-date information is needed. It has a context length of 128k tokens,[15] with an output limit initially capped at 4,096 tokens[16] and raised to 16,384 in a later snapshot (gpt-4o-2024-08-06).[17]
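
As an illustration, a minimal request against the later snapshot using the OpenAI Python SDK, with the output allowance set to the 16,384-token cap cited above; the prompt is a placeholder:

```python
# Hedged sketch with the OpenAI Python SDK: requesting the gpt-4o-2024-08-06
# snapshot and allowing up to the 16,384-token output cap cited above.
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-2024-08-06",
    messages=[{"role": "user", "content": "Summarize the history of GPT-4o."}],
    max_tokens=16_384,  # output cap; input plus output must fit the 128k context
)
print(response.choices[0].message.content)
```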

As of May 2024, it was the leading model on the LMSYS Chatbot Arena Elo leaderboard, run by the University of California, Berkeley.[18]

Corporate customization

In August 2024, OpenAI introduced a feature allowing corporate customers to customize GPT-4o using proprietary company data. This customization, known as fine-tuning, enables businesses to adapt GPT-4o to specific tasks or industries, enhancing its utility in areas such as customer service and specialized knowledge domains. Previously, fine-tuning was available only for the less powerful GPT-4o mini model.[19][20]

The fine-tuning process requires customers to upload their data to OpenAI's servers, with training typically taking one to two hours. Initially, customization is limited to text-based data. OpenAI's stated aim with the rollout is to reduce the complexity and effort required for businesses to tailor AI solutions to their needs, potentially increasing the adoption and effectiveness of AI in corporate environments.[21][19]
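
A minimal sketch of that workflow with the OpenAI Python SDK, assuming a JSONL file of chat-formatted training examples and the gpt-4o-2024-08-06 snapshot; the file name and model choice are illustrative rather than prescribed:

```python
# Hedged sketch of the fine-tuning flow described above, using the OpenAI
# Python SDK. The file name, JSONL format, and snapshot name are assumptions.
from openai import OpenAI

client = OpenAI()

# Upload the proprietary training data (JSONL of chat-formatted examples).
training_file = client.files.create(
    file=open("company_support_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# Start a fine-tuning job on a GPT-4o snapshot; training typically takes
# one to two hours according to the coverage cited above.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-2024-08-06",
)
print(job.id, job.status)
```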

GPT-4o mini

On July 18, 2024, OpenAI released a smaller and cheaper version, GPT-4o mini.[22]

According to OpenAI, its low cost is expected to be particularly useful for companies, startups, and developers seeking to integrate it into services that make a high volume of API calls. Its API costs $0.15 per million input tokens and $0.60 per million output tokens, compared to $5 and $15, respectively, for GPT-4o. It is also significantly more capable, and 60% cheaper, than GPT-3.5 Turbo, which it replaced on the ChatGPT interface.[22] After fine-tuning, the price doubles to $0.30 per million input tokens and $1.20 per million output tokens.[23]
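
As a worked example of those rates (USD per million tokens, as cited above; the token counts are illustrative and prices may have changed since):

```python
# Illustrative cost comparison using the per-million-token prices cited above.
def api_cost(input_tokens, output_tokens, in_price, out_price):
    """Cost in USD given token counts and prices per million tokens."""
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# One million input tokens and 200,000 output tokens:
print(api_cost(1_000_000, 200_000, 0.15, 0.60))   # GPT-4o mini: $0.27
print(api_cost(1_000_000, 200_000, 5.00, 15.00))  # GPT-4o:      $8.00
```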

GPT-4o mini is the default model for guests using ChatGPT without logging in and for users who have hit their GPT-4o usage limit.

GPT-4o mini will become available in fall 2024 on Apple's mobile devices and Mac desktops, through the Apple Intelligence feature.[22]

Scarlett Johansson controversy

At release, GPT-4o offered five voices: Breeze, Cove, Ember, Juniper, and Sky. The similarity between the Sky voice and that of American actress Scarlett Johansson was quickly noticed. On May 14, Entertainment Weekly asked whether the likeness was intentional.[24] On May 18, Johansson's husband, Colin Jost, joked about the similarity in a segment on Saturday Night Live.[25] On May 20, 2024, OpenAI disabled the Sky voice, issuing a statement saying, "We've heard questions about how we chose the voices in ChatGPT, especially Sky. We are working to pause the use of Sky while we address them."[26]

Scarlett Johansson starred in the 2013 sci-fi movie Her, playing Samantha, an artificially intelligent virtual assistant personified by a female voice. As part of the promotion leading up to the release of GPT-4o, Sam Altman on May 13 tweeted a single word: "her".[27][28]

OpenAI stated that each voice was based on the voice work of a hired actor. According to OpenAI, "Sky's voice is not an imitation of Scarlett Johansson but belongs to a different professional actress using her own natural speaking voice."[26] CTO Mira Murati stated, "I don't know about the voice. I actually had to go and listen to Scarlett Johansson's voice." OpenAI further stated that the voice talent was recruited before the company reached out to Johansson.[28][29]

On May 21, Johansson issued a statement saying that OpenAI had repeatedly sought a deal for permission to use her voice, beginning as early as nine months before release, and that she had declined. She said she was "shocked, angered, and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference." In the statement, Johansson also used the incident to draw attention to the lack of legal safeguards around the use of creative work to power leading AI tools, as her legal counsel demanded that OpenAI detail how the Sky voice was created.[28][30]

Observers noted similarities to how Johansson had previously sued and settled with The Walt Disney Company for breach of contract over the direct-to-streaming rollout of her Marvel film Black Widow,[31] a settlement widely speculated to have netted her around $40 million.[32]

Also on May 21, Shira Ovide of The Washington Post published her list of the "most bone-headed self-owns" by technology companies, ranking sixth OpenAI's decision to go ahead with a Johansson sound-alike voice despite her opposition and then deny the similarity.[33] On May 24, Derek Robertson of Politico wrote about the "massive backlash", concluding that "appropriating the voice of one of the world's most famous movie stars — in reference [...] to a film that serves as a cautionary tale about over-reliance on AI — is unlikely to help shift the public back into [Sam Altman's] corner anytime soon."[34]

References

  1. ^ a b c Wiggers, Kyle (2024-05-13). "OpenAI debuts GPT-4o 'omni' model now powering ChatGPT". TechCrunch. Retrieved 2024-05-13.
  2. ^ Field, Hayden (2024-05-13). "OpenAI launches new AI model GPT-4o and desktop version of ChatGPT". CNBC. Retrieved 2024-05-14.
  3. ^ Colburn, Thomas. "OpenAI unveils GPT-4o, a fresh multimodal AI flagship model". The Register. Retrieved 2024-05-18.
  4. ^ Edwards, Benj (2024-05-13). "Before launching, GPT-4o broke records on chatbot leaderboard under a secret name". Ars Technica. Retrieved 2024-05-17.
  5. ^ Zeff, Maxwell (2024-05-07). "Powerful New Chatbot Mysteriously Returns in the Middle of the Night". Gizmodo. Retrieved 2024-05-17.
  6. ^ van Rijmenam, Mark (13 May 2024). "OpenAI Launched GPT-4o: The Future of AI Interactions Is Here". The Digital Speaker. Retrieved 17 May 2024.
  7. ^ Daws, Ryan (2024-05-14). "GPT-4o delivers human-like AI interaction with text, audio, and vision integration". AI News. Retrieved 2024-05-18.
  8. ^ a b c "Hello GPT-4o". OpenAI.
  9. ^ "OpenAI GPT-4o: How to access GPT-4o voice mode; insights from Sam Altman". The Times of India. 2024-05-16. ISSN 0971-8257. Retrieved 2024-05-18.
  10. ^ Morrison, Ryan (2024-07-19). "OpenAI to make GPT-4o Advanced Voice available by the end of the month to select group of users". Tom's Guide. Retrieved 2024-09-10.
  11. ^ "Pricing". openai.com. Retrieved 2024-11-29.
  12. ^ "Introducing the Realtime API". openai.com. Retrieved 2024-11-29.
  13. ^ Edwards, Benj (2024-05-13). "Major ChatGPT-4o update allows audio-video talks with an "emotional" AI chatbot". Ars Technica. Retrieved 2024-05-17.
  14. ^ "OpenAI Platform". platform.openai.com. Retrieved 2024-11-29.
  15. ^ a b "Models - OpenAI API". OpenAI. Retrieved 17 May 2024.
  16. ^ a b Conway, Adam (2024-05-13). "What is GPT-4o? Everything you need to know about the new OpenAI model that everyone can use for free". XDA Developers. Retrieved 2024-05-17.
  17. ^ "Models".
  18. ^ Franzen, Carl (2024-05-13). "OpenAI announces new free model GPT-4o and ChatGPT for desktop". VentureBeat. Retrieved 2024-05-18.
  19. ^ a b "OpenAI lets companies customise its most powerful AI model". South China Morning Post. 2024-08-21. Retrieved 2024-08-22.
  20. ^ "OpenAI to Let Companies Customize Its Most Powerful AI Model". Bloomberg. 2024-08-20. Retrieved 2024-08-22.
  21. ^ The Hindu Bureau (2024-08-21). "OpenAI will let businesses customise GPT-4o for specific use cases". The Hindu. ISSN 0971-751X. Retrieved 2024-08-22.
  22. ^ a b c Franzen, Carl (2024-07-18). "OpenAI unveils GPT-4o mini — a smaller, much cheaper multimodal AI model". VentureBeat. Retrieved 2024-07-18.
  23. ^ "OpenAI Pricing".
  24. ^ Stenzel, Wesley (May 14, 2024). "ChatGPT launching talking AI that sounds exactly like Scarlett Johansson in 'Her' — on purpose?". Entertainment Weekly. Retrieved 2024-05-21.
  25. ^ Caruso, Nick (2024-05-20). "Scarlett Johansson Says She Was 'Shocked, Angered and in Disbelief' After Hearing ChatGPT Voice That Sounds Like Her — Read Statement". TVLine. Retrieved 2024-05-21.
  26. ^ a b "How the voices for ChatGPT were chosen". OpenAI. May 19, 2024.
  27. ^ "her". X (formerly Twitter). May 13, 2024. Retrieved 2024-05-21.
  28. ^ a b c Allyn, Bobby (May 20, 2024). "Scarlett Johansson says she is 'shocked, angered' over new ChatGPT voice". NPR.
  29. ^ Tiku, Nitasha (May 23, 2024). "OpenAI didn't copy Scarlett Johansson's voice for ChatGPT, records show". The Washington Post. Retrieved November 29, 2024.
  30. ^ Mickle, Tripp (2024-05-20). "Scarlett Johansson Said No, but OpenAI's Virtual Assistant Sounds Just Like Her". The New York Times. ISSN 0362-4331. Retrieved 2024-05-21.
  31. ^ "Scarlett Johansson took on Disney. Now she's battling OpenAI over a ChatGPT voice that sounds like hers". Yahoo Finance. 2024-05-21. Retrieved 2024-05-21.
  32. ^ Pulver, Andrew (2021-10-01). "Scarlett Johansson settles Black Widow lawsuit with Disney". The Guardian. ISSN 0261-3077. Retrieved 2024-05-21.
  33. ^ Ovide, Shira (30 May 2024). "Exactly how stupid was what OpenAI did to Scarlett Johansson?". The Washington Post.
  34. ^ Robertson, Derek (May 22, 2024). "Sam Altman's Scarlett Johansson Blunder Just Made AI a Harder Sell in DC". Politico.