The new trend among AI chatbots is a race to develop deep analysis systems, one that threatens traditional intellectual work
Elon Musk’s new AI, Grok 3, is now official. Among its advertised capabilities is a function called ‘Deep Search’, suspiciously similar to the ‘Deep Research’ name coined by Google and adopted by OpenAI. This was to be expected: in recent weeks almost every AI giant has announced similar capabilities.
- OpenAI has Deep Research.
- Google presented its own version in Gemini.
- Perplexity has been perfecting this functionality for months.
It is a new trend in AI that goes beyond incremental improvements. These systems can browse the web, analyze multiple sources, synthesize information, and produce detailed reports on a subject, with a level of sophistication that comes dangerously close to the work of many human analysts. In any field.
The difference from traditional prompts is huge. Instead of returning a few tokens of information in seconds, they return pages of analysis in minutes. And it is nothing like search either: rather than a list of semantically related links, they can understand complex questions, break them down into parts, investigate each aspect by consulting dozens of sources, and assemble a coherent analysis that cites its references. In under ten minutes.
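The loop just described (decompose the question, investigate each part, assemble a cited report) can be sketched in a few lines. Everything below is a hypothetical stand-in: the tiny in-memory corpus replaces live web search, and the naive string splitting replaces the language model a real Deep Research system would use.

```python
# Minimal sketch of a deep-research loop: decompose a question into
# sub-questions, gather evidence for each from a (toy) source pool,
# and assemble a report that cites its sources.
from dataclasses import dataclass


@dataclass
class Source:
    url: str
    text: str


# Toy corpus standing in for live web results.
CORPUS = [
    Source("https://example.com/a", "Deep Research agents browse the web."),
    Source("https://example.com/b", "They synthesize many sources into reports."),
]


def decompose(question: str) -> list[str]:
    # A real system would ask an LLM; here we split on a fixed pattern.
    return [part.strip() for part in question.split(" and ")]


def investigate(sub_question: str) -> list[Source]:
    # Stand-in for search: keep sources sharing a word with the sub-question.
    words = set(sub_question.lower().split())
    return [s for s in CORPUS if words & set(s.text.lower().split())]


def report(question: str) -> str:
    # Assemble one section per sub-question, each bullet citing its source.
    lines = [f"# {question}"]
    for sq in decompose(question):
        lines.append(f"## {sq}")
        for src in investigate(sq):
            lines.append(f"- {src.text} [{src.url}]")
    return "\n".join(lines)


# Prints a small markdown report with one cited bullet per sub-question.
print(report("how agents browse the web and synthesize reports"))
```

The real systems differ in scale, not shape: the decomposition and synthesis steps are handled by the model itself, and the corpus is the open web.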
The results are impressive. OpenAI claims, and our own testing largely bears this out, that its Deep Research can do in half an hour what would take professional analysts days. And although it makes occasional mistakes (a factual slip, or citing a source that does not exist), the overall quality of the result is good enough for many practical purposes.
This is a shot across the bow of much current intellectual work: junior analysts in consultancies, researchers reviewing literature, lawyers preparing preliminary briefs, financial advisors analyzing companies. A large part of their work is compiling, synthesizing, and presenting information drawn from many sources.
Exactly what any Deep Research system does.
It is not that these systems are going to completely replace intellectual workers. They still have significant limitations:
- They cannot access private or unpublished information.
- From time to time they confuse sources or draw erroneous conclusions.
- They lack the expert judgment that certain analyses require.
However, they can automate much of the repetitive and “low-level” work that many professionals do today.
This also leads us to a paradox: Deep Research systems are sure to increase the productivity of the most highly qualified workers, who can take advantage of them to enhance their skills; but they put at risk the jobs that used to serve as an entry point, as a training ground for eventually becoming one of these experts.
Deep Research has the potential to alter the career paths of any knowledge-based industry.
It is yet another example of how AI not only automates manual labor but is also entering territory we thought was reserved for the human intellect. The question is no longer whether AI can do that intellectual work, but how much of that work will still make economic sense when done by humans.
There will be companies that, out of ignorance, cynicism or pride, prefer to ignore these capabilities. They are the ones most exposed to the risk of being left behind. The rest of us are left with the task of thinking about how to manage this transition: one that may render obsolete many functions we thought were immune to automation.