For those who have been following this blog series, apologies for the late post with updates about Python Open Labs. Last week we covered some basics of web scraping with Python.
So, getting along with the updates. In a nutshell, web scraping is a way of extracting useful, relevant information from web pages, i.e. HTML pages. It can be broken down into the following steps:
- Download the web page content (use the urllib or requests module in Python).
- View the page source in a web browser to examine the HTML structure of the page and locate the information of interest for your task at hand.
- Figure out the HTML features (class, id, tag name, etc.) that will help your Python script locate that information.
- Use the BeautifulSoup Python module to parse the page and navigate as close as possible to the relevant information in the HTML structure, then extract it using string methods.
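The steps above can be sketched in a few lines of code. This is a minimal sketch: the hardcoded HTML snippet and its tags (`h2`, `class="title"`) are made-up stand-ins for a downloaded page, not from any real site.

```python
from bs4 import BeautifulSoup

# Step 1 would normally download the page, e.g.:
#   import requests
#   html = requests.get("https://example.com").text
# Here a small hardcoded snippet stands in for the downloaded page.
html = """
<html><body>
  <div class="article">
    <h2 class="title">Web Scraping 101</h2>
    <p class="byline">by Jane Doe</p>
  </div>
</body></html>
"""

# Steps 2-3 happen in your browser: you inspect the source and notice
# the headline lives in an <h2> with class "title".
soup = BeautifulSoup(html, "html.parser")

# Step 4: navigate to the element and extract its text.
title = soup.find("h2", class_="title").get_text()
print(title)  # Web Scraping 101
```

`html.parser` is Python's built-in parser; BeautifulSoup can also use faster third-party parsers such as lxml if you have them installed.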
Steps 2–4 go hand in hand, i.e. each one helps you build on the others. For example, the more you understand about the HTML structure surrounding the information, the more specific the inputs you can pass to BeautifulSoup's methods to extract it.
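To see why that specificity matters, consider a page with similar-looking elements in different sections. This is an illustrative sketch; the `content`/`footer` ids and `price` classes are invented for the example.

```python
from bs4 import BeautifulSoup

html = """
<html><body>
  <div id="content">
    <span class="price">$10</span>
    <span class="price sale">$7</span>
  </div>
  <div id="footer">
    <span class="price">$99</span>
  </div>
</body></html>
"""
soup = BeautifulSoup(html, "html.parser")

# A broad search picks up every matching element on the page...
all_prices = [s.get_text() for s in soup.find_all("span", class_="price")]
print(all_prices)  # ['$10', '$7', '$99']

# ...while knowing the surrounding structure (the id of the enclosing div)
# lets you narrow the search to just the section you care about.
content = soup.find("div", id="content")
content_prices = [s.get_text() for s in content.find_all("span", class_="price")]
print(content_prices)  # ['$10', '$7']
```

Note that `class_="price"` matches any element whose class list contains "price", which is why the `"price sale"` span is included.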
For the previous session I have uploaded the sample Python files with commented code to the Google Drive link mentioned below, which you can access under the Session – 16 folder. Make sure you work through those. Doubts, queries, and feedback are always welcome 🙂
All of the course slides and examples are made available on: https://goo.gl/YP0c2E
We will be continuing with the web scraping lecture on March 31, 2017, after which I will also upload a comprehensive document with some additional relevant sources and more interesting code.
Happy Scraping!
See you next Friday from 1:30 PM – 3:30 PM at DSSC (Room – 215), Lehman Library at Columbia SIPA!