A disallow only prohibits reading the content of a page, not indexing it (and it is not focused on deindexing in the first place).
A note: a <meta name="robots" content="noindex"> is not the same as a Disallow in robots.txt. They mean completely different things.
Disallow prohibits spiders from reading the HTML of the page
They can still see the URL, so it can appear in searches with information gathered from other pages and links on the Internet. Also, a disallow does not remove pages that are already indexed, and it is not unusual that, after adding a URL to a disallow, the result remains in the search engine for a while. It is definitely not a quick way to deindex; rather, it is geared towards crawling issues and medium- or long-term actions.
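For reference, a disallow is nothing more than a rule like this in robots.txt (the path is just an invented example):

User-agent: *
Disallow: /old-catalogue/

With this in place, spiders stop fetching anything under /old-catalogue/, but the URLs themselves can still show up in results if other pages link to them.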
On the other hand, a noindex in the meta-robots does allow the spider to crawl the HTML, but it prevents the page from appearing in Google. It is the opposite effect: the spiders continue to waste time on that page, but the result disappears from the search engine sooner.
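For comparison, the noindex lives inside the HTML itself, in the <head> of the page (again, an illustrative snippet):

<head>
  <meta name="robots" content="noindex">
</head>

The spider has to download the page to read this instruction, which is precisely why crawling must remain allowed for it to work.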
All of this, of course, has its nuances
In the long run, a Disallow will cause deindexing if there are no external links to that page and, conversely, a meta-robots set to noindex will end up causing less crawling of that URL, since Google cannot do anything useful with it.
If the content is not read, then the HTML directives are obviously ignored
There’s no point in a disallow+noindex or a disallow+canonical or a disallow+rel-next/prev or a disallow+whatever-in-the-HTML. Google isn’t going to look at this HTML because we’ve banned it from accessing it, so please spare us the tagging.
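To picture the contradiction, imagine a setup like this one (path invented for the example): the noindex sits in an HTML file that Googlebot is forbidden to download, so it might as well not exist.

# robots.txt
User-agent: *
Disallow: /landing-old/

<!-- inside /landing-old/ : never fetched, therefore never read -->
<meta name="robots" content="noindex">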
The same thing happens (albeit to a lesser extent) with redirects
If I create a 301 redirect from an old URL to a new one and at the same time block the old one via robots.txt, Google will never know that I created the 301 (because it is not allowed to access the URL that returns it), so the transfer of authority will not happen efficiently.
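A sketch of the problem (URLs invented for the example): the old URL answers with a 301, but robots.txt stops Googlebot from ever requesting it, so the redirect is never discovered.

# robots.txt
User-agent: *
Disallow: /old-page/

# What /old-page/ would return if the crawler were allowed to ask:
# HTTP/1.1 301 Moved Permanently
# Location: https://www.example.com/new-page/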