In practice, Google does sometimes notice the redirect, but generally you lose a lot of authority by doing these things.
Another case that deserves attention is the use of the meta-robots noindex directive in combination with other directives. In theory, nothing prevents you from using, for example, both a noindex and a canonical at the same time. However, the interpretation of this combination is extremely ambiguous. In ambiguous situations like this, Google tends to ignore all of the HTML signals out of caution, since it cannot fully rely on them.
Therefore, although in theory these combinations are possible, I would not recommend using them, except in the case of “noindex,follow”. Even this combination should be used with caution, since noindex limits the indexing of the page, which makes the use of “follow” contradictory in certain contexts.
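As a minimal illustration of the ambiguous combination described above (the URLs are hypothetical), this is what it looks like in a page’s head:

    <head>
      <!-- Asks Google not to index this page -->
      <meta name="robots" content="noindex">
      <!-- At the same time, declares another URL as the preferred version -->
      <link rel="canonical" href="https://www.example.com/preferred-page/">
    </head>

The canonical asks Google to consolidate this page into another URL, while the noindex asks it to drop the page entirely. The two signals pull in opposite directions, which is exactly the kind of ambiguity Google resolves by ignoring them.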
The way URLs are written in robots.txt is simple and very concrete, but the rules for how they are read are not as intuitive as they might seem.
We are going to review them because this is trickier than it looks and people make a lot of mistakes. Each line must start with a directive (Allow or Disallow), and each directive must be written very carefully, as in the sketch below.
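As a minimal sketch (the paths are hypothetical), a robots.txt group looks like this:

    User-agent: *
    Disallow: /private/
    Allow: /private/public-report.html
    Disallow: /*?sessionid=

The order you write the lines in matters less than their specificity: Google applies the most specific matching rule (the one with the longest path), so here the Allow line overrides the broader Disallow for that single file, while every other URL under /private/ stays blocked.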
The alternatives to robots.txt or meta-robots for preventing crawling/indexing are not equally powerful.
And that’s just how it is: there is nothing more powerful and long-lasting than a robots.txt directive…
In Google Search Console you can use the content removal tool and those URLs will be removed. But after approximately 90 days Google will forget that you asked it to remove the URL and, if for whatever reason it finds it again, it will re-index it. So it is useful for fixing specific mistakes, but not for removing URLs that will continue to exist.
In Google Search Console you can use the URL parameters tool to indicate whether a parameter changes a page’s content or not, but this is not mandatory.
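A hypothetical illustration of that distinction (the domain and parameters are made up):

    https://www.example.com/shoes?sort=price    → same products, just reordered (the parameter does not change the content)
    https://www.example.com/shoes?product=1234  → loads a different product (the parameter does change the content)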