OpenAI’s ChatGPT Can Now ‘Speak,’ Listen, and Process Images

OpenAI’s ChatGPT has received its biggest update since the introduction of GPT-4. Users can now opt into voice conversations on the mobile app, choosing from five different synthetic voices for the bot’s responses, and can share images with ChatGPT, highlighting specific areas for it to focus on or analyze (for example, asking “What kinds of clouds are these?”). Here are the key features of the update:

  • Voice conversations: Users can opt into spoken conversations on ChatGPT’s mobile app.
  • Synthetic voices: Five different synthetic voices are available for the bot’s responses.
  • Image processing: Users can share images with ChatGPT and highlight specific areas for focus or analysis.

This update is a significant step forward in the development of chatbots that can “see, hear and speak”. Voice conversations make interacting with the bot more natural and intuitive, and the ability to share images and highlight areas of interest is a welcome extension of the bot’s capabilities.

The new features also make ChatGPT more versatile. Speaking and listening enable natural conversations suited to tasks such as customer service, education, and entertainment, while image processing lets ChatGPT identify objects, scenes, and activities in pictures, supporting uses such as image search and image analysis.

In short, by adding voice conversations with a choice of synthetic voices and image sharing with highlighted areas of focus, OpenAI has moved ChatGPT closer to a chatbot that can genuinely “see, hear and speak,” and made the experience of using it noticeably more natural.