OpenAI’s latest releases make one thing very clear: the future of AI tools is both brutally capable and surprisingly playful. GPT‑5.5 pushes the frontier on reasoning, coding and complex “agentic” workflows, while Codex — OpenAI’s coding assistant — just got animated AI pets that live on your desktop like digital companions for developers.
At the same time, OpenAI is broadening its strategy with gpt‑oss, a family of open‑weight models (gpt‑oss‑120b and gpt‑oss‑20b) released under the permissive Apache 2.0 license, designed for strong reasoning, tool use and local or self‑hosted deployment. Taken together, these launches say a lot about where OpenAI wants to be in the stack: powering everything from consumer agents to low‑level infrastructure.
Codex just got pets — literally
On April 30, the OpenAI Developers account quietly posted: “Pets. Now in Codex. Use /pet to wake your pet.” — along with a short clip of a small mascot floating next to a Codex window. Under the hood, this is a new UI layer inside Codex that lets you summon a tiny animated companion that hovers as an overlay around your workspace while the coding agent works.
Engadget reports that users can type /pet inside the Codex app to summon or dismiss this companion, and choose from eight built‑in characters — with the option to generate their own using AI. A second command, /hatch, invokes a “Hatch Pet” skill that lets you describe the companion you want (for example a “cute goblin pet” or a parody of Microsoft Clippy) and have Codex generate an animated sprite sheet that then lives as your persistent pet.
From the outside, this absolutely looks like a toy — it is literally a little character watching you code — but the UX idea behind it is serious.
Why a silly pet is actually a serious UX pattern
Coding agents tend to run in the background, which creates a visibility problem: is the agent still thinking, waiting for input, or stuck on an error? OpenAI’s pet overlay acts as a status surface, giving you ambient feedback on what Codex is doing without forcing you to switch back to the main thread every few seconds.
Videos and early breakdowns show that the pets sit inside Codex’s growing ecosystem of commands and skills, alongside things like planning modes and personality settings. That means the pet isn’t just a GIF; it’s integrated into the agentic workflow that Codex is evolving toward — a persistent workspace with tasks, tools, and habits rather than a one‑off autocomplete.
For developers, that matters for a few reasons:
- It increases trust: when you can see the agent’s “mood” or state, it feels less like a black box.
- It reduces cognitive load: you can keep your editor focused while still tracking background work.
- It adds emotional texture: a small bit of personality makes a long debugging session feel less sterile.
In other words, OpenAI is quietly experimenting with the emotional UX layer of agents — and doing it in a way that’s extensible via skills like Hatch Pet.
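To make the status-surface idea concrete, here is a toy sketch (not OpenAI's actual implementation; all state and animation names are invented for illustration) of how an ambient companion can be driven by a simple mapping from agent state to a visible cue:

```python
from enum import Enum, auto

class AgentState(Enum):
    """Coarse states a background coding agent might pass through."""
    THINKING = auto()
    WAITING_FOR_INPUT = auto()
    RUNNING_TOOLS = auto()
    ERROR = auto()
    IDLE = auto()

# Hypothetical mapping from agent state to (animation, tooltip) shown by the overlay pet.
PET_CUES = {
    AgentState.THINKING: ("pacing", "Working on it..."),
    AgentState.WAITING_FOR_INPUT: ("waving", "Needs your input"),
    AgentState.RUNNING_TOOLS: ("typing", "Running tools"),
    AgentState.ERROR: ("slumped", "Hit an error"),
    AgentState.IDLE: ("sleeping", "Idle"),
}

def pet_cue(state: AgentState) -> tuple[str, str]:
    """Return the (animation, tooltip) pair the companion overlay should display."""
    return PET_CUES[state]
```

The point is that the companion doubles as a glanceable status indicator: one look at the overlay tells you whether the agent is working, blocked, or done, without opening the main thread.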
GPT‑5.5: the “serious” half of the story
On the heavy‑duty side, OpenAI recently launched GPT‑5.5, described as its “smartest and most intuitive to use” model so far. The model was released on April 23, 2026, and is available to paying users across ChatGPT (Plus, Pro, Business, Enterprise) and Codex, with API access promised “very soon.”
According to OpenAI and coverage from major outlets, GPT‑5.5 is designed specifically for:
- Agentic coding and computer use — having the model navigate UIs, operate software and perform multi‑step tasks.
- Deeper research and knowledge work — handling longer‑form, multi‑document workflows with better planning and tool use.
- Technical domains like mathematics and early‑stage scientific research, where it shows improved benchmark performance.
OpenAI reports that GPT‑5.5 beats both its own previous models and competitors such as Google’s Gemini 3.1 Pro and Anthropic’s Claude Opus on a range of benchmarks, and independent summaries highlight strong scores on reasoning‑heavy suites like Terminal‑Bench 2.0 and FrontierMath. A short launch video from OpenAI frames GPT‑5.5 as “a new class of intelligence for real work and powering agents, built to understand complex goals, use tools, check its work, and carry more tasks through to completion.”
That positioning matters: GPT‑5.5 is not just “more text” — it is clearly optimized to act as the brain behind agents like Codex, which makes the pet layer feel like the UI frosting on a much more serious cake.
gpt‑oss: OpenAI goes properly open‑weight
Alongside its hosted models, OpenAI has also moved into the open‑weight space with gpt‑oss‑120b and gpt‑oss‑20b. These models are:
- Released under the Apache 2.0 license, which is extremely permissive and friendly to commercial use and customization.
- Designed as reasoning‑first models, with strong performance on tool use, function calling and chain‑of‑thought tasks.
- Engineered to run efficiently: gpt‑oss‑120b is structured as a sparse MoE model that fits on a single 80 GB GPU, while gpt‑oss‑20b targets 16 GB setups for on‑device and edge‑style deployments.
Benchmarks shared by OpenAI show gpt‑oss‑120b reaching near‑parity with OpenAI's o4‑mini on reasoning tests, while gpt‑oss‑20b can match or approach o3‑mini on many tasks, despite its much lower parameter count. Community analysis notes that these models also perform surprisingly well on high‑level science and knowledge benchmarks, not just simple chat.
For developers and companies, this opens three big doors:
- Build locally controlled agents with strong reasoning without sending data to OpenAI’s hosted stack.
- Customize and fine‑tune the models deeply for niche use cases under a permissive license.
- Mix and match: use GPT‑5.5 for frontier agentic tasks via Codex and ChatGPT, and gpt‑oss for cost‑sensitive or privacy‑sensitive workloads.
This is the first time OpenAI has offered open‑weight models at this level of quality and flexibility, and the move is clearly aimed at keeping the company relevant as open ecosystems mature.
What this combo means for developers and founders
Put Codex pets, GPT‑5.5, and gpt‑oss together and you get a clear picture of OpenAI’s roadmap:
- Agents as the default interface. GPT‑5.5 is marketed explicitly as an engine for agentic coding, computer use and complex workflows, not just a chatbox.
- A full stack from hosted to open‑weight. gpt‑oss models can live in your own infra, while GPT‑5.5 and Codex give you a polished managed experience.
- Personality layered on top of power. Codex pets show how OpenAI wants these tools to feel — not just to compute.
If you are a developer or SaaS founder, some very concrete opportunities pop out:
- Developer tools with personality. The Codex pets pattern is easy to borrow: ambient companions that show agent status, teach features, and reward long‑running tasks.
- Vertical agents powered by GPT‑5.5. For example, an internal “research engineer” bot that runs experiments, manages docs, and controls specific tools — GPT‑5.5 is explicitly tuned for that kind of work.
- Privacy‑sensitive copilots using gpt‑oss. On‑prem or hybrid deployments where open‑weight models handle sensitive data, while hosted GPT‑5.5 is reserved for tasks that benefit from frontier capabilities.
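The hybrid pattern in that last bullet can be sketched as a tiny routing layer. Everything here is an assumption for illustration: the model identifiers, the local endpoint URL, and the two-flag policy are placeholders, not a real API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Route:
    model: str     # model identifier to request (hypothetical names)
    base_url: str  # endpoint the request would be sent to

# Assumed deployments: a self-hosted gpt-oss server and OpenAI's hosted API.
LOCAL = Route(model="gpt-oss-20b", base_url="http://localhost:8000/v1")
HOSTED = Route(model="gpt-5.5", base_url="https://api.openai.com/v1")

def route_request(contains_sensitive_data: bool, needs_frontier_reasoning: bool) -> Route:
    """Keep sensitive data on-prem; send only non-sensitive, hard tasks to the hosted model."""
    if contains_sensitive_data or not needs_frontier_reasoning:
        return LOCAL
    return HOSTED
```

A real router would also weigh cost, latency, and context length, but the core design choice is the same: the sensitivity check runs first, so sensitive prompts can never fall through to the hosted endpoint.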
The key is that OpenAI is no longer just shipping “a bigger model”; it’s shipping an ecosystem: agents, UIs, open weights and personality all at once.
Getting hands‑on: how to try this stack
If you want to actually play with this as a builder:
In Codex:
- Enable Codex in your OpenAI account and open the app.
- Type /pet in a Codex thread to wake the default pet overlay.
- Install the Hatch Pet skill (where available) and run /hatch with a description to generate a custom companion.
With GPT‑5.5:
- Use ChatGPT on a paid tier (Plus, Pro, Business, Enterprise) and switch the model to GPT‑5.5 or GPT‑5.5 Pro where available.
- Move real work into it: coding tasks, data analysis, research and multi‑step computer workflows.
With gpt‑oss:
- Pull gpt‑oss‑120b or gpt‑oss‑20b from OpenAI’s official GitHub repo or supported model hubs.
- Deploy either via managed offerings like Vertex AI and MaaS endpoints, or self‑host on your own GPUs following the model card guidelines.
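Once a gpt‑oss model sits behind an OpenAI‑compatible chat endpoint (the interface most self‑hosting servers expose), you can talk to it with nothing but the standard library. The base URL and model name below are assumptions for a local deployment, not fixed values:

```python
import json
import urllib.request

def build_chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for an OpenAI-compatible /chat/completions endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Assumed local server address; adjust to wherever your gpt-oss deployment listens.
req = build_chat_request("http://localhost:8000/v1", "gpt-oss-20b",
                         "Explain mixture-of-experts routing briefly.")
# Sending it (requires a running server):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Swapping in the official OpenAI Python client with a custom base URL works the same way; the request shape is what matters.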
From there, you can start experimenting with the same design patterns OpenAI is using: agents with memory and tools, open‑weight fallbacks, and yes — little digital companions that sit on top.



