OpenAI claims the tool represents a big step toward its overarching goal of developing artificial general intelligence (AGI) that matches (or surpasses) human performance. It says that what takes the tool “tens of minutes” would take a human many hours.
In response to a single query, such as “Draw me up a competitive analysis of streaming platforms,” Deep Research will search the web, analyze the data it encounters, and compile an in-depth report that cites its sources. It’s also able to draw from files uploaded by users.
OpenAI developed Deep Research using the same “chain of thought” reinforcement-learning methods it used to create its o1 multistep reasoning model. But while o1 was designed to focus primarily on mathematics, coding, and other STEM-based tasks, Deep Research can tackle a far broader range of subjects. It can also adjust its responses in reaction to new data it comes across in the course of its research.
This doesn’t mean that Deep Research is immune to the pitfalls that befall other AI models. OpenAI says the agent can sometimes hallucinate facts and present its users with misinformation, albeit at a “notably” lower rate than ChatGPT. And since each query can take between five and 30 minutes for Deep Research to answer, it is very compute intensive: the longer it takes to research a query, the more computing power is required.
Despite that, Deep Research is now available at no extra cost to subscribers to OpenAI’s paid Pro tier and will soon roll out to its Plus, Team, and Enterprise users.