Scraping headlines with cron jobs, Python, and MongoDB

I’ve been interested in measuring bias in media coverage for quite a while. The discourse before and after the 2016 election has forced a lot of us (definitely me) into an anxiety spiral, trying to keep up and maintain sanity under the weight of our 24-hour news culture. To make myself feel better, I recently set up a cron job on a server that pulls the RSS feeds of as many news sources as I could think of, and stores the headlines in a MongoDB database.

To set up the cron job, I edited my crontab on my Ubuntu server by typing crontab -e. In the crontab I added the following line:

*/15 * * * * /home/news_agg/ >/dev/null 2>&1

This tells cron to run the script located at /home/news_agg/ every 15 minutes. The >/dev/null 2>&1 part discards the script's standard output and standard error, which stops cron from mailing or logging a message each time it runs.
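If you'd rather keep a record of each run instead of discarding it, you can append the output to a log file (the log path here is just an example, not part of my setup):

```shell
*/15 * * * * /home/news_agg/ >> /home/news_agg_cron.log 2>&1
```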

The news-scraping Python script that gets executed every 15 minutes is fairly straightforward. First, feedparser (for parsing RSS feeds), PyMongo (the Python driver for working with MongoDB), and the datetime package are imported, and a dictionary of RSS feeds is constructed (the feed URLs are omitted below):

#!/usr/bin/env python
import feedparser
import datetime
import pymongo

#create a dictionary of rss feeds
feeds = dict(
    nyt = r'',
    fox = r'',
    wsj_opinion = r'',
    wsj_business = r'',
    wsj_world = r'',
    wapo_national = r'',
    cnn = r'',
    cnn_us = r'',
    breitbart = r'',
    cnbc = r'',
    abc = r'',
    bbc = r'',
    wired = r'',
    upi = r'',
    reuters = r'',
    usa_today = r'',
    ap = r'',
    npr = r'',
    democracy_now = r'',
)

From there, we loop through each of the RSS urls, pull out the title and source of each article, and store everything in a temporary data list of dicts. That list then gets written to a MongoDB collection called headlines in a database called news:

#grab the current time
dt = datetime.datetime.utcnow()

data = []
for feed, url in feeds.items():

    rss_parsed = feedparser.parse(url)
    titles = [art['title'] for art in rss_parsed['items']]

    #create dict for each news source
    #(these field names are my reconstruction of the stored document)
    d = {
        'source': feed,
        'headlines': titles,
        'time': dt,
    }
    data.append(d)

# Access the 'headlines' collection in the 'news' database
client = pymongo.MongoClient()
collection = client['news']['headlines']

#insert the data
collection.insert_many(data)
With the crontab in action, this script runs every 15 minutes, which means we have a periodic snapshot of all of the news sources' RSS feeds. This effectively creates a time series of news headlines at a 15-minute time step.

This script was kicked off on June 12, 2017 (about 30 days before the day of this post). Since then, I've only scratched the surface with analysis. I've also realized that my Mongo document structure is probably pretty awkward, but, hey, it works. As an example, I set up a query to count the number of times a particular topic was found in each news source's RSS feed:
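An aggregation pipeline along these lines produces that kind of per-source tally. This is a sketch rather than the exact query I ran, and it assumes each stored document has a 'source' name and a list of 'headlines' titles:

```python
import re

# Hypothetical reconstruction of a per-source topic count.
# Assumes one document per source per snapshot, shaped like:
#   {'source': 'nyt', 'headlines': ['title one', 'title two'], 'time': ...}
topic = re.compile('Yemen')

pipeline = [
    {'$unwind': '$headlines'},                   # one doc per headline
    {'$match': {'headlines': topic}},            # keep matching headlines
    {'$group': {
        '_id': '$source',                        # group by news source
        'stories': {'$addToSet': '$headlines'},  # unique matching titles
        'count': {'$sum': 1},                    # number of matches
    }},
    {'$sort': {'count': -1}},
]

# Against a live MongoDB instance this would run as:
# import pymongo
# collection = pymongo.MongoClient()['news']['headlines']
# results = list(collection.aggregate(pipeline))
```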


Here, I’ve queried the data for headlines including “Yemen”, which returned the following:

{
	"_id" : "nyt",
	"stories" : [
		"Cholera Spreads as War and Poverty Batter Yemen"
	],
	"count" : 87
}
{
	"_id" : "democracy_now",
	"stories" : [
		"Cholera Death Toll Tops 859 in War-Torn Yemen as U.S.-Backed Saudi Assault Continues"
	],
	"count" : 48
}
{
	"_id" : "fox",
	"stories" : [
		"Yemenis rally in support for secession of country's south",
		"Naval coalition steps up patrols around Yemen after attacks"
	],
	"count" : 2
}

This shows that, between June 12 and July 9, 2017, our script found that the RSS feeds of the New York Times, Democracy Now!, and Fox News included stories about the war in Yemen 87, 48, and 2 times, respectively. Since each RSS feed snapshot was taken at a 15-minute interval, this suggests Yemen coverage could be found on NYT's feed for a cumulative total of almost 22 hours, while lasting just 30 minutes on Fox's feed.
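Since each count corresponds to one 15-minute snapshot, converting counts into cumulative on-feed time is simple arithmetic:

```python
def coverage_hours(snapshot_count, interval_minutes=15):
    """Convert a snapshot count into cumulative hours on the feed."""
    return snapshot_count * interval_minutes / 60.0

print(coverage_hours(87))  # nyt -> 21.75
print(coverage_hours(48))  # democracy_now -> 12.0
print(coverage_hours(2))   # fox -> 0.5
```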

A similar query also suggests that aggregate RSS coverage of Barron and Melania Trump moving into the White House lasted more than 50 hours across the news sources.

Reading Shapefiles into Pandas Dataframes

I’ve just about had it up to here with ArcMap and arcpy. Today I begin my quest to free myself from ever needing to rely on ESRI for spatial analysis and mapping.

Geopandas seems great, but I have had a lot of trouble getting it installed and have therefore been hesitant to rely on it in any package I create. Instead, I’ve used the following snippet to read a shapefile into a Pandas dataframe for quick analysis. You will need the pyshp package and Pandas. If you don’t have these, install them via pip and you’re ready to go:
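For reference, both dependencies install cleanly from PyPI:

```shell
pip install pyshp pandas
```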

import shapefile #the pyshp module
import pandas as pd

#read file, parse out the records and shapes
shapefile_path = r'path/to/shapefile/'
sf = shapefile.Reader(shapefile_path)

#grab the shapefile's field names (omit the first pseudo field)
fields = [x[0] for x in sf.fields][1:]
records = sf.records()
shps = [s.points for s in sf.shapes()]

#write the records into a dataframe
shapefile_dataframe = pd.DataFrame(columns=fields, data=records)

#add the coordinate data to a column called "coords"
shapefile_dataframe = shapefile_dataframe.assign(coords=shps)

Now shapefile_dataframe has all of the input shapefile’s records and geometry data.
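From there the geometry can be analyzed with plain pandas operations. Here's a sketch that computes each shape's bounding box from its coords column; since I can't bundle a shapefile here, it uses a toy dataframe with made-up field names and points standing in for shapefile_dataframe:

```python
import pandas as pd

# Toy stand-in for shapefile_dataframe (made-up fields and coordinates)
df = pd.DataFrame({
    'NAME': ['tract_a', 'tract_b'],
    'POP': [1200, 3400],
    'coords': [[(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)],
               [(2.0, 2.0), (3.0, 2.0), (3.0, 3.0)]],
})

def bbox(points):
    """Return (xmin, ymin, xmax, ymax) for a list of (x, y) points."""
    xs, ys = zip(*points)
    return (min(xs), min(ys), max(xs), max(ys))

# Add a bounding box column derived from the coordinate lists
df['bbox'] = df['coords'].apply(bbox)
print(df[['NAME', 'POP', 'bbox']])
```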

Senate Voting Partisanship in 2014

Inspired by this Gist