This means you can disallow search engines from crawling certain pages, but it isn’t the way to actually keep those pages out of the index; that’s what noindex directives are for.
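For context, a noindex directive lives on the page itself rather than in robots.txt, most commonly as a meta robots tag in the HTML head (the snippet below is just an illustration):

```html
<!-- Placed in the <head> of the page you want kept out of the index -->
<meta name="robots" content="noindex, follow">
```

The same directive can also be sent as an X-Robots-Tag HTTP header, which is handy for non-HTML files like PDFs.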
Anyway, the usual reason to change a robots.txt file is to stop search engines from visiting certain URLs, because those URLs waste crawl budget. A website’s internal search engine that generates parameter URLs is a classic example:
If you type seo hustlers into the search page https://www.example.com/search/, you’ll likely end up on something like https://www.example.com/search/?q=seo+hustlers. That isn’t a real page on the site, just a result page generated by the internal search engine. And since anything can be typed into that search box, do you want those parameter URLs to be found? NO, because they’re irrelevant.
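As a rough sketch (the example.com domain, the /search/ path and the q parameter are just placeholders for this illustration), the simplest rule to keep crawlers away from those internal search results could look like this:

```
# Block crawling of internal search result URLs, but leave /search/ itself crawlable
User-agent: *
Disallow: /search/?q=
```

Because robots.txt rules are prefix matches, this blocks every URL that starts with /search/?q= regardless of what was typed into the search box.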
Therefore you set rules to make sure the search engine knows which pages not to visit (and which pages it should visit). That will look something like this: