Research Data Services, jointly supported by the Libraries and CUIT, provides support and consulting for research data needs at Columbia University. We help with many aspects of the research data lifecycle, including research data management, finding data, recommendations for cleaning and understanding data, mapping, and data visualization.
For those who have been following this blog series, apologies for the late post with updates on Python Open Labs.
Last week we covered some basics of web scraping with Python, but before I start, let me make a customary disclaimer.
So, getting along with the updates: in a nutshell, web scraping can be described as a way of extracting useful, relevant information from web pages, i.e., HTML pages. The process can be broken down into the following steps:
Download the web page content (use the urllib or requests module in Python).
View the page source in a web browser to examine the HTML structure of the page and locate the information of interest for your task at hand.
Figure out the HTML structure (class, id, HTML tag, etc.) that will help your Python script locate the information.
Use the BeautifulSoup Python module to parse the page, navigate as close as possible to the relevant information in the HTML structure, and then extract it using string methods.
Steps 2–4 go hand in hand, i.e., each helps you build on the others. For example, the more you understand about the HTML structure of the page, the more specific the inputs you can provide to BeautifulSoup methods to extract the information.
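The steps above can be sketched as follows. To keep the example self-contained and deterministic, it parses a small hardcoded HTML string with BeautifulSoup (the `bs4` package) instead of downloading a live page; the class names (`article`, `title`) and the commented-out URL are hypothetical.

```python
from bs4 import BeautifulSoup

# Step 1 (sketched): download page content, e.g.
#   import requests
#   html = requests.get("https://example.com/posts").text
# Here we use a small inline page so the example runs offline.
html = """
<html><body>
<div class="article">
  <h2 class="title">First post</h2>
  <p class="summary">Summary one.</p>
</div>
<div class="article">
  <h2 class="title">Second post</h2>
  <p class="summary">Summary two.</p>
</div>
</body></html>
"""

# Steps 2-3 happen in the browser: inspecting this page, we see each
# item of interest is an <h2> with class "title".

# Step 4: parse with BeautifulSoup and extract the text
soup = BeautifulSoup(html, "html.parser")
titles = [h2.get_text(strip=True) for h2 in soup.find_all("h2", class_="title")]
print(titles)  # ['First post', 'Second post']
```

Note that `class_` has a trailing underscore because `class` is a reserved word in Python.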
For the previous session I have uploaded the sample Python files, with commented code lines, to the Google Drive link mentioned below; you can access them under the Session – 16 folder. Make sure you work through those. Doubts, queries, and feedback are always welcome 🙂
In the 15th session of Python Open Labs, we looked at some miscellaneous topics and revised basic concepts of file reading and string handling from previous sessions. We also briefly looked into format strings / format specifiers for string construction in Python. The relevant slides are available in the Session – 15 folder on the Google Drive link mentioned below.
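As a quick taste of what the format-specifier material covers, here is a minimal sketch showing the old-style `%` specifiers alongside `str.format`; the names and values are made up for illustration.

```python
name = "Ada"
score = 91.4567

# Old-style % format specifiers: %s for strings, %.2f for a float
# rounded to two decimal places
old_style = "Hello, %s! Score: %.2f" % (name, score)

# str.format uses the same specifiers after a colon inside {}
new_style = "Hello, {}! Score: {:.2f}".format(name, score)

# Alignment within a field width of 8: > right, < left, ^ center
aligned = "{:>8}|{:<8}|{:^8}".format("right", "left", "mid")

print(old_style)  # Hello, Ada! Score: 91.46
print(new_style)  # Hello, Ada! Score: 91.46
print(aligned)    # "   right|left    |  mid   "
```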
Today we had a brief review session of all the basics of programming that we have covered so far in the Python Open Labs series. During this review we went over reading and writing files, conditional statements, for loops, while loops, and various other specifics of programming with Python. This marks a major milestone in the series, as all of the material covered so far should be sufficient for the basic programming/scripting tasks that you may need.
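To tie the reviewed topics together, here is a small sketch that writes a file, reads it back line by line, and filters with a conditional; the file name and data are made up for illustration.

```python
import os
import tempfile

# Write a small comma-separated file (hypothetical names and scores)
path = os.path.join(tempfile.gettempdir(), "scores.txt")
with open(path, "w") as f:
    f.write("alice,82\nbob,47\ncarol,91\n")

# Read it back: a for loop over a file object yields one line at a time
passing = []
with open(path) as f:
    for line in f:
        name, score = line.strip().split(",")
        if int(score) >= 60:      # conditional statement
            passing.append(name)

print(passing)  # ['alice', 'carol']
```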
In the next session I will introduce Object Oriented Design with Python.
For those who are getting started with Python, please watch this space for a concise blog post on the basics of Python installation, IDE setup, etc., in the coming week!
See you next Friday from 1:30 PM – 3:30 PM at DSSC (Room 215), Lehman Library at Columbia SIPA!
In this session of Python Open Labs we looked at Python dictionaries, one of the most powerful data types built into Python and well suited to in-memory lookup tables for fast lookups and search queries.
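A minimal sketch of a dictionary used as a lookup table, counting words in a made-up sentence:

```python
# Build an in-memory lookup table mapping each word to its count
counts = {}
for word in "the quick fox and the lazy dog and the cat".split():
    counts[word] = counts.get(word, 0) + 1

print(counts["the"])          # 3 -- direct key lookup, O(1) on average
print(counts.get("bird", 0))  # 0 -- .get returns a default instead of KeyError
print("fox" in counts)        # True -- fast membership test on keys
```

Because dictionary lookups are hash-based, checking `word in counts` stays fast even with millions of keys, unlike searching a list.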
In the next session we will revise some of the earlier concepts and introduce Object Oriented Design with Python.
Welcome back to Python Open Labs at DSSC (Lehman Library, SIPA). This semester we will be moving ahead with our weekly lecture-cum-practice open labs on Python, so do join us every Friday from 1:30 PM – 3:30 PM.
In the first session of the Spring 2017 semester we revisited some of the concepts covered in the first five sessions of the Fall 2016 series: arithmetic operations, conditional statements, assignment statements, operators, basic control flow, and function definitions.
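As a compact refresher, here is a sketch combining several of those basics in one place; the function name and values are made up for illustration.

```python
def classify(n):
    """Combine arithmetic, conditionals, and a return value."""
    if n % 2 == 0:           # arithmetic operator inside a conditional
        return "even"
    elif n > 0:
        return "odd-positive"
    else:
        return "odd-negative"

labels = []
for n in (-3, 4, 7):         # basic control flow with a for loop
    labels.append(classify(n))

print(labels)  # ['odd-negative', 'even', 'odd-positive']
```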