
Project: Room. Team Promotion’s Final Document


Pub 607

April 11, 2016

Team Promotion

Amaris Bourdeau, Sarah Corsie

David Ly, Alanna McMullen, &  Zoë Tustin


The purpose of this project was to create a podcast and e-newsletter to promote the 2016 Journal of MPub. The podcast and e-newsletter cover different publishing-related subjects that have been addressed in student essays. While podcasts and e-newsletters are largely considered “unsocial” forms of media, we have used social media to create a dialogue between users. Because the audience to whom we are promoting the Journal of MPub is found online, enabling this discussion was crucial. The promotional plan below details our promotional strategy, our financial summary, and the creation of our podcast and e-newsletter, including our SEO strategy and our relationship with iTunes.


Team DAT!Analysis: Project Report

Content Analysis & Data Visualization of MPub Project Reports and TKBR Essays

Alice Fleerackers, Monica Miller, Josh Oliveira, Alison Strobel, Zoe Wake Hyde

This project was conceived as part of a wider undertaking by the Master of Publishing class of 2015 to explore the creation of a new Journal of MPub, showcasing the work they have produced throughout the year. Our team’s contribution to this was to collect the current work, as well as that of past cohorts, and see what insights could be drawn from them.

Our final report details the process by which our team collected, processed, and analyzed three years’ worth of essays from the PUB 800 and PUB 802 courses hosted on the school’s TKBR site, as well as fifteen years’ worth of project reports from the SFU institutional repository, Summit. Our goal was to gain a deeper understanding of the content generated by SFU’s Master of Publishing students.

Download the full report as a PDF

The Book of MPub 2016

The Book of MPub 2016 is the culmination of our efforts in MPub’s Digital Technology Project.

The idea behind The Book of MPub is the creation of a digital space where SFU’s Master of Publishing students can engage an interested audience with their project deliverables and the goings-on of their cohort. We have collectively re-imagined what this web-based project might look like based on the original Book of MPub, published in 2010. We have used different tools and technologies to think through—not build—everything from production workflow to audience analysis.

This is the landing page for The Book of MPub’s Twitterbot. The Twitterbot aims to engage an audience on Twitter that may be interested in the ideas of these future publishers.

#bot: project plan & deliverables

A beautiful project by Katherine, Gillian, and Erik.

We are creating a Twitterbot to engage with people interested in the Canadian publishing industry. Our bot will publicize the work done in MPub—as a stand-in for The Book of MPub, we will use our colleagues’ essays on the TKBR server to engage our audience. We have broken our process into three steps: “The Manual Work,” “The Ideas,” and “Coding and Testing.” A tentative fourth step (time permitting) will involve an audience analysis of our bot’s followers or related networks.

Step One: The Manual Work

The first step involves researching the basics and collecting information. We are currently learning about JavaScript, Twitter’s API documentation, and the twit Twitter API client. We are also researching general approaches to building a Twitterbot audience.

Further to this, we brainstormed key topics and hashtags for our Twitterbot, and created a list of important publishers for our bot to follow, based on accounts that we (as individuals, and through the existing MPub Twitter account) already follow and are aware of.

We emailed the mpub-15 list for permission to share the essays published on TKBR, identifying key themes of each and tracking responses in a spreadsheet. This will help us to select particular #topics to focus on.

Lastly, we each created Twitter accounts in order to practise our JavaScript skills. An account for our bot was also created separately from these accounts.

Step Two: The Ideas

Next, we brainstormed bot behaviours. These will include:

  • following relevant accounts on the basis of their description (key terms in bio), who their followers are (e.g., important publishers), or their behaviour (tweets about a topic)
  • following back people who follow us
  • retweeting particular topics and certain specific #hashtags
  • tweeting @ new followers
  • tweeting out links to new essays posted on TKBR, as well as new podcasts and newsletters
  • tweeting old essays to people who:
    • follow us and then mention the relevant #topic
    • mention a relevant #topic enough times in a certain period of time (TBD)
    • tweet @ us and mention the relevant topic anywhere in tweet
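The last behaviour above (tweeting old essays to people who mention a topic often enough) reduces to decision logic that can be written and tested independently of any Twitter client. A minimal Python sketch, with the threshold and time window as placeholders since the plan leaves them TBD:

```python
from collections import defaultdict, deque
import time

# Placeholder values -- the plan leaves the real threshold and window TBD.
MENTION_THRESHOLD = 3          # mentions needed before we tweet an essay link
WINDOW_SECONDS = 60 * 60 * 24  # rolling 24-hour window

class TopicTracker:
    """Tracks how often each user mentions each #topic in a rolling window."""

    def __init__(self, threshold=MENTION_THRESHOLD, window=WINDOW_SECONDS):
        self.threshold = threshold
        self.window = window
        self.mentions = defaultdict(deque)  # (user, topic) -> mention timestamps

    def record(self, user, topic, now=None):
        """Record one mention; return True if the user has now crossed
        the threshold and should be sent a matching essay link."""
        now = time.time() if now is None else now
        q = self.mentions[(user, topic)]
        q.append(now)
        # Drop mentions that have aged out of the rolling window.
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) >= self.threshold
```

The actual bot would call `record()` from its stream handler and, on `True`, look up an essay tagged with that topic and tweet it at the user.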

Step Three: Coding & Testing

In this step, we will implement, test, and assess the utility of each bot behaviour. Working independently, group members will use their own Twitter accounts to write and test each piece of code. Once we’re confident that it works, we will compile it into a single master document. This document will also include the code necessary to keep our bot within Twitter’s rate/access limits and running smoothly over a longer period of time. Ultimately, we will upload the bot to the TKBR server.
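Staying within rate limits could be handled with a simple token-bucket throttle. A generic sketch (the capacity and period are placeholders; Twitter's real limits vary per endpoint):

```python
import time

class TokenBucket:
    """Allow at most `capacity` actions per `period` seconds, refilling smoothly."""

    def __init__(self, capacity, period, start=None):
        self.capacity = capacity
        self.refill_rate = capacity / period  # tokens added per second
        self.tokens = float(capacity)
        self.last = time.monotonic() if start is None else start

    def allow(self, now=None):
        """Return True if an action (e.g. one tweet) may proceed now."""
        now = time.monotonic() if now is None else now
        # Refill tokens proportionally to the elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

The bot would wrap every outgoing API call in `if bucket.allow(): ...`, with one bucket per rate-limited endpoint.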

The Deliverables

Our deliverables will include:

  • our rough work (such as completed spreadsheets)
  • our functioning Twitterbot (JavaScript documents) with a number of behaviours
  • a specification sheet detailing bot behaviours and an assessment of their functionality (such as effects on audience growth and engagement)
  • our report, including:
    • our decision-making process for deciding topics and behaviours
    • documentation of our development and learning process
  • time permitting, an audience analysis of our Twitterbot’s followers

Project Plan: DAT!Analysis

By Alison Strobel, Monica Miller, Zoe Wake Hyde, Josh Oliveira and Alice Fleerackers


We at Team DAT!Analysis want to gain a deeper understanding of the content generated by SFU’s Master of Publishing students. We are curious how—or perhaps whether—MPub writing has transformed throughout the years to reflect changing societal and industry trends. We want to know how events such as the rise of social media or the financial crisis of 2008 have altered the kind of things we write about as publishing students and the way we write about them.

We are also curious how the kind of writing published on TKBR differs from that published in SFU’s Summit Research Repository. Are the TKBR posts more blog-like? Are they more closely tied to current events? How does the sentence length in TKBR posts compare to the Project Reports? The TKBR posts also present a unique opportunity to conduct some more detailed analysis using the associated metadata. Does the time of day published affect the sentence length of posts? Are the tags students assign truly effective descriptions of their essay topics? What factors (if any) influence the number of links and sources used throughout a post?
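A question like the sentence-length comparison above takes only a few lines of Python to answer. A sketch using a naive sentence splitter (abbreviations like "e.g." will be miscounted, which is acceptable for a rough comparison between the two corpora):

```python
import re

def avg_sentence_length(text):
    """Average words per sentence, using a naive punctuation-based splitter."""
    sentences = [s for s in re.split(r"[.!?]+\s+", text.strip()) if s]
    if not sentences:
        return 0.0
    return sum(len(s.split()) for s in sentences) / len(sentences)
```

Running this over every TKBR post and every Project Report, then comparing the two distributions, would answer whether the blog posts really are "shorter-breathed" than the formal reports.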

These are just some of the questions we might investigate over the course of our analysis. We hope that the results of our investigation will provide greater insight into what is being written in the publishing program and why—findings that could be used to inform acquisition strategy and site organization for the forthcoming Book of MPub 2016.


The Process

To conduct our analysis, our team will have to execute the following steps. For efficiency’s sake, these steps can be executed concurrently (using sample data) by different team members.

Collect & Extract

We will download the past three years of Technology essays from TKBR, either by using the WP JSON API or by writing an SQL script to collect them directly from the database. Josh and Alison will take the lead on the TKBR scrape.
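Pulling posts over the WP JSON API amounts to paging through the standard `/wp-json/wp/v2/posts` endpoint. A sketch (the endpoint path is the stock WordPress REST route, which we assume TKBR exposes; the fetcher is injectable so the paging logic can be tested offline, and the real API also reports the page count in its `X-WP-TotalPages` header):

```python
import json
import urllib.request

def fetch_page(base_url, page):
    """Fetch one page of posts from the standard WP REST API endpoint."""
    url = f"{base_url}/wp-json/wp/v2/posts?per_page=100&page={page}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def collect_posts(fetch, max_pages=50):
    """Page through posts until a page comes back empty.
    `fetch` is any callable page -> list-of-posts, so a stub works in tests."""
    posts = []
    for page in range(1, max_pages + 1):
        batch = fetch(page)
        if not batch:
            break
        posts.extend(batch)
    return posts
```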

At the same time, we will collect the 297 PDFs in the Summit Research Repository’s Publishing Program – Theses, Dissertations, and other Required Graduate Degree Essays collection. We will write a web-scraping script in Python to download the PDFs, and then use a second Python script to convert them to text files for analysis. Zoe will be heading this step.
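The Summit scrape splits into two passes: finding the PDF links on each collection page, then converting the downloads to text (a library such as pdfminer or the `pdftotext` tool is a common choice for the second pass, though the report does not specify one). The link-extraction pass can be done with the standard-library HTML parser; the markup shown in the test is an assumed example, not Summit's actual page structure:

```python
from html.parser import HTMLParser

class PdfLinkParser(HTMLParser):
    """Collect every href on a page that points at a PDF file."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href") or ""
            if href.lower().endswith(".pdf"):
                self.links.append(href)

def pdf_links(html):
    """Return all PDF hrefs found in an HTML page."""
    parser = PdfLinkParser()
    parser.feed(html)
    return parser.links
```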


Clean

While collection is occurring, Monica will investigate how to clean up the data using a small sample of WordPress posts. She will use OpenRefine (formerly Google Refine) to conduct the cleanup of the TKBR essays.
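The core of the cleanup OpenRefine performs interactively (stripping markup, unescaping entities, collapsing whitespace) can also be sketched in a few lines, useful for sanity-checking the WordPress sample before the full pass:

```python
import html
import re

def clean_post(raw):
    """Strip HTML tags, unescape entities, and collapse whitespace in a post body."""
    text = re.sub(r"<[^>]+>", " ", raw)   # drop tags (naive, but fine for WP output)
    text = html.unescape(text)            # &amp; -> &, etc.
    return re.sub(r"\s+", " ", text).strip()
```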


Analyze

Once the data has been cleaned, we will use MALLET (MAchine Learning for LanguagE Toolkit) to conduct our analysis. We will identify common topics and examine their evolution over time. We will conduct analyses both between and across our two data sets (i.e., compare TKBR with Project Report data, and investigate overall trends in MPub writing in general). Alice will take the lead on exploring MALLET’s capabilities.
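MALLET's standard topic-modelling workflow is two command-line steps, shown here with its documented flags (the topic count of 20 is an arbitrary placeholder to tune against the corpus):

```shell
# Import the cleaned text files (one document per file) into MALLET's format.
bin/mallet import-dir --input cleaned-essays/ --output essays.mallet \
    --keep-sequence --remove-stopwords

# Train an LDA model; --num-topics is a placeholder to experiment with.
bin/mallet train-topics --input essays.mallet --num-topics 20 \
    --output-topic-keys topic-keys.txt --output-doc-topics doc-topics.txt
```

`topic-keys.txt` lists the top words per topic, and `doc-topics.txt` gives each essay's topic proportions, which is the input for the over-time comparison.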


Visualize

We plan to create data visualizations of our findings using Tableau. This may include word clouds, stacked bar charts, heat maps, and more to chart topic frequency over time and to visualize lists of terms that frequently occur together.


Report

We will document our work with frequent screenshots and notes, which we will later compile into a formal report. We will structure this final document like a scientific report, with Introduction, Purpose, Methods, Results, and Discussion sections. It will include our most useful data visualizations, outline key findings, and offer context for understanding our results. For example, we may choose to compare topic evolution in our dataset with the evolution of the same terms in the Google Ngram Viewer. At this point, we will also assess the usefulness of our analyses within the context of the Book of MPub 2016.


Present

We will share our process and findings with the class in a fun but informative presentation.


Deliverables

  1. Web scraping script for locating and downloading PDFs from SFU’s Summit repository
  2. SQL script for extracting text from TKBR posts from database OR steps used to interact with WordPress JSON API
  3. Complete data set: both cleaned and raw data
  4. List / detailed steps for cleaning data in OpenRefine
  5. Overview of MALLET analyses performed and results
  6. Tableau visualizations of key findings
  7. Final Report


Timeline

  • Mar 18 Proposal due (Alice)
  • Mar 21 Initial research due (all) and data collected (Josh & Alison, Zoe)
    • Each member has explored their assigned tool
    • We will use class to fill each other in on our findings
    • Data is collected and ready to clean
  • Mar 24 Data extracted and cleaned (Monica/all)
    • Data is ready to start MALLET analysis
    • Team has compiled list of questions to investigate
  • Mar 29 Data analysis due (Alice/all)
    • Team meets to discuss findings and conduct data visualizations
    • Start working on final report
  • April 4 Pre-final deliverables due
    • Start working on presentation
    • Revise and finesse report
  • April 11 Final document and presentation due

© 2021 alicef. Unless otherwise noted, all material on this site is licensed under a Creative Commons Attribution 4.0 License.
