o1 |
4549 |
OpenAI’s o1 is designed to reason before it responds and provides world-class capabilities on complex tasks (e.g. science, coding, and math). Improving upon o1-preview and using higher reasoning effort, it can also reason through images and supports 200k tokens of input context. By default, it uses a reasoning_effort of “medium”; the ability to customize reasoning effort per message is coming soon.
o1-mini |
452 |
Small version of OpenAI’s o1 model, designed to spend more time thinking before it responds while offering a better performance profile. Can reason through complex tasks in science, coding, and math. For most tasks, https://poe.com/o3-mini will be better. Supports 128k tokens of context.
o1-pro |
9070 |
o1-pro is OpenAI’s highly capable reasoning model, tailored for complex, compute- or context-heavy tasks, dedicating additional thinking time to deliver more accurate, reliable answers. For complex tasks at lower cost, https://poe.com/o3-mini is recommended.
o3-mini |
266 |
o3-mini is OpenAI’s most recent reasoning model, providing high intelligence across a variety of tasks and domains, including science, math, and coding. This bot uses low reasoning effort by default, but low, medium & high can be selected; it supports 200k tokens of input context and 100k tokens of output context.
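The reasoning-effort choice above maps to the reasoning_effort parameter in OpenAI’s chat completions API. As a minimal sketch (shown as a plain request payload so the shape is visible without a network call; build_request is a hypothetical helper, not part of any SDK):

```python
def build_request(prompt: str, effort: str = "low") -> dict:
    """Build an o3-mini chat completions payload with a chosen reasoning effort.

    "low" mirrors this bot's default; "medium" and "high" trade latency for
    more deliberate reasoning.
    """
    if effort not in {"low", "medium", "high"}:
        raise ValueError(f"unsupported reasoning_effort: {effort}")
    return {
        "model": "o3-mini",
        "reasoning_effort": effort,
        "messages": [{"role": "user", "content": prompt}],
    }
```

The same payload would be passed to the API client of your choice (e.g. the openai Python SDK’s chat.completions.create, which accepts reasoning_effort for o-series models).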
o3-mini-high |
552 |
o3-mini-high is OpenAI’s most recent reasoning model with reasoning_effort set to high, providing frontier intelligence on most tasks. Like other models in the o-series, it is designed to excel at science, math, and coding tasks. Supports 200k tokens of input context and 100k tokens of output context. |
DALL-E-3 |
1500 |
OpenAI’s most powerful image generation model. Generates high-quality images with intricate details based on the user’s most recent prompt. Use “--aspect” to select an aspect ratio (e.g. --aspect 1:1). Valid aspect ratios are 1:1, 7:4, & 4:7.
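The --aspect flag above is parsed out of the prompt text before generation. A minimal sketch of that parsing, assuming the bot falls back to 1:1 when the flag is absent or invalid (parse_aspect is a hypothetical helper; the real parsing happens server-side):

```python
import re

# The aspect ratios the DALL-E-3 bot accepts, per the description above.
VALID_ASPECTS = {"1:1", "7:4", "4:7"}

def parse_aspect(prompt: str) -> tuple[str, str]:
    """Split a prompt into (image text, aspect ratio).

    Strips a valid "--aspect W:H" flag from the prompt; otherwise returns
    the prompt unchanged with the assumed default of 1:1.
    """
    match = re.search(r"--aspect\s+(\d+:\d+)", prompt)
    if match and match.group(1) in VALID_ASPECTS:
        text = (prompt[:match.start()] + prompt[match.end():]).strip()
        return text, match.group(1)
    return prompt.strip(), "1:1"
```

For example, parse_aspect("a red fox in snow --aspect 7:4") yields the cleaned prompt plus "7:4", while an unsupported ratio like 9:9 is ignored.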
GPT-4.5-Preview |
6950 |
A research preview of GPT-4.5, a model designed to be more conversational, empathetic & helpful than past models in the GPT series, and with greater world knowledge. Recommended for use cases related to writing & communication, learning & tutoring, coaching & self-help, and brainstorming. You may experience errors and slower responses at times of peak usage. Supports a 128k token context window. |