Originally, screen scraping referred to the practice of reading text data from a computer display terminal's screen. This was generally done by reading the terminal's memory through its auxiliary port, or by connecting the terminal output port of one computer system to an input port on another.
In the 1980s, financial data providers such as Reuters, Telerate, and Quotron displayed data in 24×80 format intended for a human reader. Users of this data, particularly investment banks, wrote applications to capture and convert this character data into numeric data for inclusion in calculations for trading decisions without re-keying the data. The common term for this practice, especially in the United Kingdom, was page shredding, since the results could be imagined to have passed through a paper shredder.
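The core of such a "page shredding" application was slicing known column positions out of a fixed-width screen line and converting the text to numbers. The sketch below illustrates the idea in Python; the column layout, field names, and quote values are invented for illustration, not taken from any actual Reuters, Telerate, or Quotron page format.

```python
# Hypothetical illustration of "page shredding": converting one 80-column
# line of captured terminal character data into numeric values.
# The field positions below are assumptions, not a real vendor page layout.

def parse_quote_line(line: str) -> dict:
    """Slice a fixed-width 80-column screen line into named fields,
    converting the price columns from text to floats."""
    return {
        "symbol": line[0:10].strip(),
        "bid": float(line[10:18]),
        "ask": float(line[20:28]),
    }

# Simulate a captured screen line (symbol left-justified in 10 columns,
# prices right-justified in 8-column fields).
screen_line = f"{'GBPUSD':<10}{1.2345:>8.4f}  {1.2350:>8.4f}".ljust(80)

quote = parse_quote_line(screen_line)
print(quote)  # numeric fields are now usable in trading calculations
```

The same slice-and-convert loop would be run over every captured line of the 24-line page, which is why changes to a vendor's screen layout could silently break these applications.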
More modern screen scraping techniques include capturing the bitmap data from the screen and running it through an OCR engine, or, in the case of GUI applications, querying the graphical controls by programmatically obtaining references to their underlying programming objects.