Entering the Post-Prompting World
Takeaways from Minus One with Amjad Masad and Haya Odeh
We’re entering the next era of AI: post-prompting. As we reach the apex of manual prompting capabilities, the actual frontier lies in autonomous agents and spatial canvases. You should not have to coax an AI step-by-step through a build. You should drop it in a repository, give it a directive, and let it operate in the background. It should push solutions to you, not wait to be pulled.
We spoke with Replit co-founders Amjad Masad and Haya Odeh last month at SPC. The future is clear: we’re entering a new era of autonomy.
Infinite coherence unlocks true autonomy. A year ago, agents lost their minds after an hour of operation. Today, with embedded evaluation loops and unit tests, models can stay on track indefinitely.
The pre-training plateau forced a pivot. The current leap in agent capabilities is driven by reinforcement learning and models reasoning through their own compressed context windows.
Agents must push, not pull. Autonomous agents should operate in the background, understand your organization, and proactively deliver solutions rather than waiting to be prompted step by step.
The defining bottleneck is amnesia. Today's interfaces force you into a tedious loop—re-explaining your project, re-arguing your intent, micromanaging a tool that forgets everything between sessions. The winning systems will passively retain context, infer intent from your usage patterns, and shift the dynamic from negotiation to expectation.
The UI is shifting to the spatial canvas. Typing is too slow. The future of interaction is multi-modal. It looks less like a terminal and more like Iron Man’s lab: a mix of voice, touch, and spatial manipulation.
The big labs abandoned UX. OpenAI and Anthropic are building raw engines. They are ignoring the user experience to focus on model scale. This leaves a massive, lucrative space for founders to invent the actual interface.
Consolidate or die. Context switching kills velocity. The winning developer environment places the mobile app, the backend, and the investor pitch deck onto a single, unified canvas.
Syntax is dead; logic is king. The role of the engineer changes permanently when AI graduates from a Q&A chatbot to a persistent background operator. You start orchestrating complex systems instead of writing boilerplate.
Vibe coding weaponizes domain expertise. The profile of a builder changes when natural interaction replaces rigid prompt engineering. The next great applications will be built by domain experts solving their own complex problems.
Watch the full Minus One talk with Amjad and Haya, out now on YouTube.
Minus One is a series about the winding journeys the world’s most interesting people take to becoming great—and what they do when figuring out a question we all face: What’s Next? Because before you launch at Zero, you have to figure out what to launch at Minus One. Hosted by South Park Commons Partners.
Interested in SPC? Apply to join us here.