OpenAI's DevDay 2025: here's everything you missed
Nanobits Product Spotlight

EDITOR'S NOTE
Dear future-proof humans,
On October 5th, Sam Altman opened DevDay 2025 with a clear sense of purpose: this year's event isn't just about new features; it's about redefining how AI fits into everything else. His keynote framed the announcements as steps in a broader transition from "model-first" to "platform-first." The announcements, demos, and partnerships revealed a bold ambition: embed AI deeper into how we live, work, and build.
"We meant to build a super assistant … and we got a little sidetracked. We're entering a new era of how software gets written."
At the heart of the event, Sam Altman sketched a vision of a future where AI doesn't just answer questions (reactive AI), but performs tasks, runs apps, and coordinates across tools (agentic AI). That ambition surfaced in everything from the new ChatGPT Apps SDK to AgentKit, from upgrades in Codex to hints of hardware collaboration with Jony Ive.
In a recent interview, Altman said, "We're trying to build very capable AI … and then deploy it in a way that really benefits people and they can use it for all sorts of things." He also pointed out that the next generation of tools should let even non-programmers shape software. In one anecdote, Altman told of an 89-year-old in Japan who learned to build iPhone apps for elderly users with the help of ChatGPT.
If this DevDay is a turning point, then the signal it sends is that software as we know it is being remodeled around intelligence. Let's see how far that remodeling might go.
WHAT WAS SHIPPED AND WHY IT MATTERS
DevDay 2025 was packed with launches across multiple layers: user features, developer tools, models, and infrastructure. Many of these moves reinforce a singular aim: to collapse the distance between "you ask" and "it acts." Here is a breakdown of the major new products and capabilities.
Apps in ChatGPT & the Apps SDK
OpenAI has folded apps into ChatGPT itself, shifting the interface from "chat plus web links" to a conversational app shell. It's beginning to resemble an "AI OS." Rather than pinging out to external sites, users can now run partner apps directly within the chat window. Developers get early access to the Apps SDK, built on the open Model Context Protocol (MCP). Through that SDK, they define both the app's logic and its interface, enabling deeper integration of services.
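To make the app-building side concrete, here is a minimal sketch of the kind of tool surface such an app might expose. It uses the open-source MCP Python SDK (the protocol the Apps SDK builds on) rather than the Apps SDK itself, and the app name, tool, and catalog data are illustrative assumptions, not a real partner integration.

```python
# Minimal MCP server sketch: a hypothetical "dinnerware_store" app exposing
# one tool that an MCP-capable client (such as ChatGPT via the Apps SDK)
# could discover and call mid-conversation.
# Requires the open-source MCP Python SDK: pip install "mcp[cli]"
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("dinnerware_store")

@mcp.tool()
def search_products(query: str, max_results: int = 5) -> list[dict]:
    """Search a toy in-memory catalog and return simple product records."""
    catalog = [
        {"name": "Stoneware dinner set, 16 pc", "price_usd": 89.0},
        {"name": "Hand-glazed stoneware bowls", "price_usd": 42.5},
    ]
    terms = query.lower().split()
    return [p for p in catalog if any(t in p["name"].lower() for t in terms)][:max_results]

if __name__ == "__main__":
    # Serve over stdio so a local MCP client can connect and call the tool.
    mcp.run()
```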
Developers are now part of OpenAI's distribution engine; the more apps built on ChatGPT, the richer the user experience, and the stickier the platform becomes. That shift pressures other AI players to think beyond models: platform controls (UX, monetization, discovery) are becoming as important as the quality of model output.
Discovery is baked in; ChatGPT will suggest apps contextually as users converse, and later this year OpenAI plans to accept app submissions and host a directory of curated apps. Monetization isn't far behind either; OpenAI intends to support in-ChatGPT commerce, allowing users to transact ("instant checkout") within the same flow. While I am all for innovation and gung-ho about this feature, the commerce protocol and in-ChatGPT purchases could strain user trust if mishandled.

Prompt used: I want to buy stoneware dinnerware from Etsy
At launch, supporting apps include Spotify, Canva, Zillow, Coursera, Booking, Expedia, and more. This move reframes ChatGPT, not as a chatbot, but as a universal app platform. For users, it means fewer context shifts; for developers, it opens direct access to 800 million weekly users.
In the next iteration, I used a slightly more complex prompt:
If apps inside ChatGPT start replacing standalone apps, OpenAI is pushing into territory held by Apple, Google, and other platform owners. Does this mean we are closer to an era of mobile phones with no apps, just an intelligent AI assistant seamlessly handling everything?
Razorpay launched Agentic Payments with the Razorpay connector on ChatGPT, which further solidifies my belief that an app-less phone is becoming a reality sooner than ever.
AI made its first payment in India - seamlessly - no redirects, no hassles.
We just piloted Razorpay Agentic Payments with @NPCI_NPCI & @OpenAI at #GFF2025.
You chat → the AI shops → you confirm → it pays.
India didn't wait for the future. It switched it on. – Harshil Mathur (@harshilmathur)
8:22 AM · Oct 9, 2025
With an app ecosystem, it's imperative that OpenAI moderate apps while still letting them innovate. To that end, it has published developer guidelines and draft standards for app submission and app criteria (design, performance), so quality and control are baked in from the start.
AGENTKIT: TOOLS FOR AGENTIC WORKFLOWS
This was one of my favourite launches. AgentKit is a toolkit for building production-grade AI agents (task-oriented, multi-step workflows). The kit includes tools for defining agent logic, prompts, tool interfaces, error handling, fallback strategies, and monitoring. AgentKit is meant to bridge the gap between prototypes and reliable agents, giving devs more guardrails and scaffolding for real use cases (versus ad hoc prompt chaining).
Well, I created something too.
Executive Personal Assistant Agent
The personal assistant agent acts as a smart task manager, helping users manage and prioritize complex daily schedules. It starts by understanding the user's intent, performs necessary searches (such as checking local traffic or transport conditions), and arranges to-dos. The agent delivers a structured daily plan, often presented as a color-coded to-do list with suggested transportation, inside an interactive widget.
Advanced Customer Support Agent
The primary use case of this customer support agent is to automate customer service, ensuring that every person visiting a website instantly gets the right answer without waiting for tickets. This AI-powered agent pulls answers directly from the company's knowledge base or uploaded company files, allowing it to know the product inside out. It is designed to deliver clear, structured, and concise conversational responses using formats like steps or bullets, making the output feel like a professional SaaS help desk. Ultimately, this provides a reliable, efficient, and scalable support solution for the business.
You can add more files to the knowledge base to make your agent more robust in its responses, add a guardrail node to specify how to handle queries requesting PII or sensitive information (for instance, credit card details), and include instructions in the system prompt for dealing with user questions that are out of scope.
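For readers who prefer code to canvases, here is a rough analogue of that setup written against the plain OpenAI Python SDK rather than AgentKit's visual builder: a toy knowledge base stuffed into the system prompt, plus a crude pre-check standing in for a guardrail node. The model name, regex, and knowledge snippets are placeholder assumptions, not how AgentKit implements these features internally.

```python
# Hedged stand-in for a "guardrail + knowledge base" support agent,
# using the OpenAI Python SDK (pip install openai). Illustrative only.
import re
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

KNOWLEDGE = """\
Refunds: full refund within 30 days of purchase.
Shipping: orders ship within 2 business days.
"""

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")  # crude credit-card check

def support_agent(user_message: str) -> str:
    # Guardrail "node": refuse before the model ever sees raw card numbers.
    if CARD_PATTERN.search(user_message):
        return "I can't process card numbers here. Please use the secure checkout page."

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any lightweight chat model works here
        messages=[
            {"role": "system",
             "content": "Answer only from the knowledge base below, in short bullet points. "
                        "If the answer isn't there, say you'll escalate to a human.\n" + KNOWLEDGE},
            {"role": "user", "content": user_message},
        ],
    )
    return resp.choices[0].message.content

print(support_agent("How long do refunds take?"))
```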
CODEX GOES FULLY SUPPORTED
Codex is now generally available (GA) rather than a preview. With GA come enhancements: a dedicated Codex SDK, enterprise controls (for admin, security, and monitoring), and integrations like Slack. According to some observers, the general availability of Codex may be the deepest leverage point behind many of OpenAI's other announcements. I really loved what Romain did with the camera and the lights at the venue by simply sketching out the interface and letting Codex code the entire interaction.
NEW, SPECIALIZED MODELS THAT ARE LEANER AND CHEAPER
OpenAI introduced or spotlighted several model variants aimed at balancing cost, capability, and specialization. It launched GPT-5 Pro, a high-precision model for use cases demanding deeper reasoning, accuracy, and reliability. It's designed for high-stakes domains where errors can cost money, reputation, or even lives, such as legal contract review, medical diagnosis, or financial compliance.
Then there is gpt-realtime-mini, a lighter voice model, roughly 70% cheaper than the large model and optimized for real-time use cases. OpenAI also launched gpt-image-1-mini, a smaller image-generation model that is roughly 80% cheaper than the larger image models.
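As a rough illustration of how that grading might look in practice, here is a small routing sketch using the OpenAI Python SDK. The tier names and model identifiers (gpt-5-pro, gpt-5-mini) are assumptions based on the names announced on stage; check the API model list for the exact strings before relying on them.

```python
# Sketch of "graded" model selection: route high-stakes requests to a
# high-precision model and routine ones to a cheaper variant.
from openai import OpenAI

client = OpenAI()

MODEL_TIERS = {
    "critical": "gpt-5-pro",   # assumption: API name for GPT-5 Pro
    "routine": "gpt-5-mini",   # assumption: a cheaper everyday tier
}

def answer(prompt: str, criticality: str = "routine") -> str:
    # The Responses API takes a model name and a plain-text input.
    resp = client.responses.create(model=MODEL_TIERS[criticality], input=prompt)
    return resp.output_text

print(answer("Summarize this contract clause for a non-lawyer: ...", criticality="critical"))
```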
But the headline grabber was the new generation of Sora, integrating video generation into the API. OpenAI's Sora 2 model now supports video and audio generation via an API, and the company bundled it into a consumer app aimed squarely at short-form media. You can create clips, refine them with natural-language prompts, overlay soundtracks, and share them, all within an interface that feels straight out of the TikTok playbook.
TikTok's moat has always been its recommendation algorithm and cultural resonance, not how hard it is to make videos. Layering generative video on top doesn't erase that, but it does open doors: niche creators, micro-educational content, and experiments that don't need big budgets or celebrity clout. On the flip side, brands could flood feeds with synthetic content, reshaping the dynamics of supply and attention.
The real power is the API; developers can programmatically generate video. Imagine news outlets producing visual explainers immediately, or training platforms creating custom video modules on the fly. Everything then hinges on how well the model handles edge cases and whether the output quality is trustworthy.
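Here is a hypothetical sketch of what that programmatic flow could look like. The videos endpoint, method names, and status values below are assumptions about the API's shape rather than confirmed details from the keynote, so treat it as pseudocode until checked against the official reference.

```python
# Hypothetical sketch of programmatic video generation with Sora 2.
# Endpoint names, parameters, and status fields are assumptions.
import time
from openai import OpenAI

client = OpenAI()

# Kick off an asynchronous render job (assumed endpoint and parameters).
video = client.videos.create(
    model="sora-2",
    prompt="A 10-second explainer: how a lithium-ion battery charges, labeled diagram style.",
)

# Poll until the job finishes (status values are assumptions).
while video.status in ("queued", "in_progress"):
    time.sleep(10)
    video = client.videos.retrieve(video.id)

if video.status == "completed":
    client.videos.download_content(video.id).write_to_file("explainer.mp4")
```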
If Sora 2 can create convincing videos instantly, the balance of power in the creator economy could shift. Will content become commoditized when you can create a three-minute explainer with a prompt? Or will the bottleneck move, from creating to curating and distributing quality work?
Having said all that, these model variations signal a move toward graded models: high-end for critical tasks, and leaner models for more frequent, lower-cost use. There is also a fragmentation risk, though: too many model variants can leave users and developers unsure which one to pick.
INFRASTRUCTURE / COMPUTE / PARTNERSHIPS
More powerful models and features require more infrastructure, making partnerships and the supply chain critical. OpenAI disclosed a major AMD partnership: a multi-gigawatt compute agreement to scale GPU infrastructure. It already had existing deals with NVIDIA; this diversification indicates that compute supply is a key bottleneck. On the hardware front, OpenAI earlier acquired io, Jony Ive's hardware design company, to develop future devices.
These product moves don't exist in isolation; they point toward a more integrated future, where ChatGPT is no longer just an interface but the platform itself. With apps embedded in ChatGPT, AgentKit scaffolding agent creation, and model variants for cost, speed, and quality, OpenAI is assembling the pillars of an AI-first platform. While early, this shift signals that the most consequential competition will no longer be just "which model is best," but "which platform can reliably host, scale, and govern intelligent agents."
END NOTE
DevDay 2025 is a signal: we're no longer in "just better models" mode. OpenAI is building toward being the substrate of software, where intelligence is the platform, not just a feature.
It showed what's possible: apps inside chats, agents that act, new model tiers, and hints of hardware, but also how far the journey still has to go. The strength of the vision matters less than the strength of the execution: reliability, safety, governance, cost, and maintainability will decide who wins.
As a reader, here's what I hope you take away:
OpenAI is staking a claim on the center of the AI ecosystem.
Many of the exciting demos assume near-perfect conditions. The real wars will be in the edges, the "last 10%" of robust system behavior.
Much of the control (monetization, discovery, hosting) resides with OpenAI. Developers and enterprises should build flexibly.
The regulatory, safety, and compute challenges are real constraints, not afterthoughts.
So here's what I encourage you to do next:
Experiment now: try building a mini agent or ChatGPT app. The tools released are early, but getting your hands dirty will help you see trade-offs.
Design for portability: avoid deep coupling to proprietary layers in the early days; maintain modularity so you can adapt if platform rules change.
Demand observability & safety: in your own projects, insist on logging, traceability, fallback paths, audit trails, and human oversight (a minimal sketch follows this list).
Watch carefully: track OpenAI's policies (monetization rules, app review, compute pricing), regulatory shifts, and competitor countermoves.
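As promised above, here is a minimal sketch of what that observability-and-fallback discipline can look like around a single model call: log every request and response to an audit file, and fall back to a cheaper model when the primary one fails. The model names and log path are placeholders, not recommendations.

```python
# Minimal observability + fallback wrapper around a model call.
import json
import logging
import time
from openai import OpenAI

logging.basicConfig(filename="agent_audit.log", level=logging.INFO)
client = OpenAI()

def ask_with_fallback(prompt: str) -> str:
    for model in ("gpt-5", "gpt-4o-mini"):  # primary, then fallback (placeholders)
        try:
            start = time.time()
            resp = client.chat.completions.create(
                model=model, messages=[{"role": "user", "content": prompt}]
            )
            text = resp.choices[0].message.content
            # Audit trail: which model answered, how fast, and what it said.
            logging.info(json.dumps({
                "model": model,
                "latency_s": round(time.time() - start, 2),
                "prompt": prompt,
                "answer": text,
            }))
            return text
        except Exception as exc:
            logging.warning(json.dumps({"model": model, "error": str(exc)}))
    return "All models failed; escalating to a human."
```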
DevDay 2025 doesn't guarantee success, but it raises the bar. OpenAI is no longer content with being "one pillar in the AI stack"; it is trying to become the foundation beneath software itself. That means the pressure is on: architectures, policies, tool ecosystems, governance, and cost models must all evolve together.
If OpenAI can execute (and avoid overextending), the next decade of AI might look less like a pursuit of "better models" and more like a race in how you connect agents, experiences, and value. If it stumbles (on cost, safety, regulation, centralization), the backlash could be sharp.
This is the moment when ambition meets implementation, and success will favor those who bridge bold thinking with disciplined execution.
Share the love ❤️ Tell your friends!
If you liked our newsletter, share this link with your friends and request them to subscribe too.
Check out our website to get the latest updates in AI