Research Data Services, jointly supported by the Libraries and CUIT, provides support and consulting for research data needs at Columbia University. It helps with many aspects of the research data lifecycle, including research data management, finding data, recommendations for cleaning and understanding data, and mapping and data visualization.
For those who have been following this blog series, apologies for the late post with updates about Python Open Labs.
Last week we covered some basics of web scraping with Python, but before I start, let me make a customary disclaimer.
So, getting on with the updates. In a nutshell, web scraping can be described as a way of extracting useful, relevant information from web pages, i.e., HTML pages. This can be broken down into the following steps:
1. Download the web page content (use the urllib or requests module in Python).
2. View the page source in a web browser to examine the HTML structure of the page and locate the information of interest for your task at hand.
3. Figure out the HTML features, such as class, id, or tag name, that will help your Python script locate the information.
4. Use the BeautifulSoup Python module to parse the page and get as close as possible to the relevant information in the HTML structure, then extract it using string methods.
Steps 2–4 go hand in hand, i.e., each one helps you build on the others. For example, the more you understand about the HTML structure surrounding your data, the more specific the inputs you can provide to BeautifulSoup methods to extract the information.
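The steps above can be sketched roughly as follows. To keep the example self-contained, the "downloaded" page is an inline HTML snippet (in practice you would fetch it with urllib or requests, as in step 1), and the class names are made up for illustration:

```python
from bs4 import BeautifulSoup

# Step 1 (in practice): download the page, e.g.
#   import urllib.request
#   html = urllib.request.urlopen("https://example.com").read()
# Here we use an inline snippet so the example runs without a network.
html = """
<html><body>
  <div class="event">
    <h2 class="title">Python Open Labs</h2>
    <span class="room">Room 215</span>
  </div>
</body></html>
"""

# Steps 2-3 happen in the browser: inspecting the source tells us the
# data lives in a div with class "event". Step 4: parse and extract.
soup = BeautifulSoup(html, "html.parser")
event = soup.find("div", class_="event")
title = event.find("h2", class_="title").get_text(strip=True)
room = event.find("span", class_="room").get_text(strip=True)
print(title, "-", room)
```

Note how the class names discovered in step 3 become the arguments to `find()` in step 4, which is exactly the interplay described above.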
For the previous session, I have uploaded the sample Python files with commented code to the Google Drive link mentioned below, which you can access under the Session 16 folder. Make sure you work through those. Doubts, queries, and feedback are always welcome 🙂
During the first 20-30 minutes of yesterday's open lab, we talked about how to merge datasets and filter data using base R and the dplyr package. The rest of the open lab was free discussion between participants and instructors.
Thank you to all who showed up!
Feel free to explore the materials I used for the open lab:
In the 15th session of Python Open Labs this week, we looked at some miscellaneous topics and revised basic concepts of file reading and string handling from previous sessions. We also briefly looked into format strings / format specifiers for string construction in Python. The relevant slides are available in the Session 15 folder on the Google Drive link mentioned below.
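As a quick illustration of the format specifiers mentioned above, here is a minimal sketch showing the three common styles of string construction in Python (the names and values are invented for the example):

```python
name = "Python Open Labs"
session = 15
attendance = 0.875

# Old %-style format specifiers: %d for integers, %s for strings
line1 = "Session %d of %s" % (session, name)

# str.format with a format spec: .1% renders a fraction as a percentage
line2 = "Attendance: {:.1%}".format(attendance)

# f-strings (Python 3.6+) embed expressions directly; 02d zero-pads to 2 digits
line3 = f"{name}: session {session:02d}"

print(line1)
print(line2)
print(line3)
```

All three produce plain strings; the format spec after the colon (width, precision, type) works the same way in `str.format` and in f-strings.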
Today we introduced the readr package, which is used for reading CSV, TSV, and other flat-file data. It is designed to flexibly parse many types of data found in the wild, while still failing cleanly when data unexpectedly changes.
We covered the functionality of the package and how it differs from base R's import functions.
Next week we will talk about the apply family of functions.
See you next Wednesday from 10 am – 12 pm at DSSC (Lehman Social Science Library Room 215)!
Today we had a brief review session covering all the basics of programming that we have seen so far in the Python Open Labs series. During this review we went over reading and writing files, conditional statements, for loops and while loops, and various other specifics of programming with Python. This marks a major milestone in the series, as all of the material covered so far should be sufficient for basic programming and scripting tasks that you may need to tackle.
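To tie the reviewed topics together, here is a small sketch that combines file writing, file reading, loops, and a conditional in one place (the filename and scores are invented for the example):

```python
# Write a few scores to a file, one per line (writing files + for loop)
with open("scores.txt", "w") as f:
    for score in [72, 85, 91, 64]:
        f.write(str(score) + "\n")

# Read the file back, keeping only passing scores (reading files +
# for loop + conditional statement)
passing = []
with open("scores.txt") as f:
    for line in f:
        value = int(line.strip())
        if value >= 70:
            passing.append(value)

print(passing)
```

Each piece here was covered in an earlier session; a surprising amount of everyday scripting is just these few constructs combined.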
In the next session I will introduce Object-Oriented Design with Python.
For those who are getting started with Python, please watch this space for a concise blog post on the basics of Python installation, IDE setup, etc., in the coming week!
See you next Friday from 1:30 PM – 3:30 PM at DSSC (Room 215), Lehman Library at Columbia SIPA!
In this session of Python Open Labs we looked at Python dictionaries, one of the most powerful data types built into Python and ideal for in-memory lookup tables that support fast lookups and search queries.
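A minimal sketch of a dictionary used as an in-memory lookup table (the course names and rooms are made up for the example):

```python
# A dictionary maps keys to values; here, course names to room assignments
rooms = {
    "Python Open Labs": "Room 215",
    "R Open Labs": "Room 215",
}

# Lookup by key is fast (average O(1)) regardless of table size
print(rooms["Python Open Labs"])

# Membership tests use the same fast hashing under the hood
print("R Open Labs" in rooms)

# .get() gives a safe lookup with a default instead of raising KeyError
print(rooms.get("Stata Open Labs", "not scheduled"))
```

The constant-time lookup is what makes dictionaries the go-to structure for search queries over in-memory data, as opposed to scanning a list.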
In the next session we will review some of the previous concepts and introduce Object-Oriented Design with Python.