GPT-4o is out, and with it comes an array of new capabilities.
Here are some key takeaways from the OpenAI Spring Update:
Release of a new flagship model: "GPT-4o"
- more natural interaction, less friction
- now supporting voice and vision - speak to GPT more naturally and upload multimedia content
- Memory function - save items for future conversations
- Advanced Data Analysis (not further specified)
- language support increased to 50 languages
- GPT Store: even more GPTs available
GPT-4o will also be available via the API (see the sketch after this list)
- 2x faster (allegedly)
- 50% cheaper (allegedly)
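For orientation, here is a minimal sketch of what calling GPT-4o through the OpenAI Python SDK might look like; the model identifier `gpt-4o` is assumed here based on the announcement, and the call follows the existing chat completions interface.

```python
from openai import OpenAI

# Assumes OPENAI_API_KEY is set in the environment
client = OpenAI()

# Minimal sketch: a single chat completion against the (assumed) "gpt-4o" model
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "Summarize the OpenAI Spring Update in one sentence."}
    ],
)

print(response.choices[0].message.content)
```

Nothing else changes on the calling side: if the pricing and latency claims hold, existing chat-completion code should simply get faster and cheaper by swapping in the new model name.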
New capabilities
- AI voice response
- can detect your emotions through voice and text recognition
- can now function as a real-time interpreter (translate everything you hear in Italian into English, and everything you hear in English into Italian); a rough sketch of this idea follows below
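The demo showed this working over live voice; as a simplified, text-only illustration of the same idea, the sketch below drives the (assumed) `gpt-4o` model with a hypothetical interpreter system prompt. The prompt wording and the `interpret` helper are illustrative, not OpenAI's implementation.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical system prompt approximating the interpreter behaviour from the demo
INTERPRETER_PROMPT = (
    "You are a real-time interpreter. If the user's message is in Italian, "
    "reply only with its English translation. If it is in English, reply "
    "only with its Italian translation."
)

def interpret(utterance: str) -> str:
    """Translate a single utterance between Italian and English."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model identifier
        messages=[
            {"role": "system", "content": INTERPRETER_PROMPT},
            {"role": "user", "content": utterance},
        ],
    )
    return response.choices[0].message.content

# Example: an Italian utterance comes back in English
print(interpret("Buongiorno, come stai?"))
```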
For more information, visit openai.com