Introduction

PUB607 is the MPub program’s Publishing Technology Project course. In Spring 2015, the course will be run as a 5-week project, following on from the PUB606 magazine project. This course aims to provide a context in which MPub students can:

  • gain hands-on experience researching and developing a range of
    digital technologies representing the state-of-the-art;
  • gain experience working on a decent-sized IT project full of the kind of ambiguities and unknowns that typically characterize such projects;
  • experiment with new technologies without serious (business) consequences.

Unlike in the Fall 2014 edition of PUB607, your evaluation will be based primarily on your final output, not on the documentation of your experiences. You will still begin by documenting a proposed plan, proceed by documenting what you are doing and why, and end with a final product or products and a brief report on what you have achieved.

Groups

You will self-organize into groups through a guided process in the classroom. Groups can be as small as two people but no larger than five. Multiple groups may tackle the same project and even collaborate with one another, as long as contributions from each team are properly acknowledged.

Groups will be given a collective grade, but this grade may be adjusted based on individual contributions. To this end, each student will send an anonymous memo to the instructor outlining each member’s contributions including their own.

Projects

Students and groups can pick from the list below or propose their own project idea. Students will take on an R&D project that involves building something (a tool or otherwise) using a preexisting technology extensively or in a novel way, or analyzing a technical challenge and proposing a solution. These projects will all necessarily involve a research component. Whatever the final deliverable, it must also be accompanied by a report of more than 1,000 words describing your process and accomplishments.

Build an efficient workflow for publishing in multiple formats: Take one of your magazine project concepts and envision what an efficient production process might look like. How do contributions get sent to the magazine? Who reviews and edits them? How does each piece get laid out? What formats is it published in? Think through the entire workflow, and then build and automate as much of that workflow as you have time for.

You will be evaluated on:

  • The quality of the analysis of requirements
  • Identifying suitable tools for constructing the workflow
  • The working prototype/example of the workflow in action
  • The quality (presentation) of the final output in multiple formats

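The workflow idea above can be sketched in code. Below is a minimal, hypothetical illustration assuming a single Markdown source file and a converter such as pandoc; the filenames and format list are invented, and the script only prints the commands it would run rather than executing them:

```python
# Sketch of a single-source, multi-format production step. The source file
# name and output formats are invented; pandoc is one common converter.
SOURCE = "article.md"
FORMATS = ["html", "epub", "docx"]

def build_commands(source, formats):
    """Build one conversion command per output format."""
    stem = source.rsplit(".", 1)[0]  # "article.md" -> "article"
    return ["pandoc {} -o {}.{}".format(source, stem, fmt) for fmt in formats]

for cmd in build_commands(SOURCE, FORMATS):
    print(cmd)  # e.g. "pandoc article.md -o article.html"
```

Automating the remaining steps (submission, review, layout) is where most of the project’s real work lies.
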
Build mockups for a UI to edit XML documents: Take an imperfect XML version of a research article and imagine a UI for editing and correcting the markup so that the final document is complete and free of errors. Given a list of common errors, you must develop a visual representation that is intuitive, efficient, and elegant for manually correcting errors as quickly as possible.

You will be evaluated on:

  • The feasibility of your design (i.e., is it possible to create it given the underlying data)
  • The completeness of your proposed design (i.e., how many types of corrections your solution can handle)
  • The way in which you represent your solution (i.e., by using appropriate tools for drawing mockups)
  • The thoughtfulness of the user experience
  • BONUS: if you present a visual style guide (how it should look, not just how it should behave)

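Your deliverable here is the mockup itself, but the feasibility criterion above depends on the errors being detectable in the underlying data. As context, here is a hypothetical sketch, using Python’s standard library and an invented article fragment, of how one common error (an empty required element) might be found:

```python
import xml.etree.ElementTree as ET

# Invented article fragment with one common markup error: a required
# element left empty. A real research article would be far larger.
doc = ET.fromstring("<article><title></title><author>Ada Lovelace</author></article>")

def find_errors(root, required=("title", "author")):
    """Flag empty required elements -- items an editing UI could list for correction."""
    errors = []
    for name in required:
        el = root.find(name)
        if el is None or not (el.text and el.text.strip()):
            errors.append("empty <{}>".format(name))
    return errors

print(find_errors(doc))  # -> ['empty <title>']
```

A UI could present each flagged item alongside the rendered article, letting the editor fix it in place.
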
Data Analysis with Google Refine and APIs: Pick a dataset and an API of your choice (Twitter, VPL, Biblioshare, CrossRef, etc.) and combine them using Google Refine. Clean and manipulate your data for analysis. The complexity/messiness of your data will be taken into account.

You will be evaluated on:

  • The value of the analysis you carry out
  • The number of different types of data manipulations that you carry out
  • The number of different tools you successfully employ in your analysis
  • How you present the results

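To illustrate what “cleaning” can mean here, the sketch below uses invented records (real responses from Twitter, CrossRef, and the like will have their own shapes) and applies operations Google Refine also offers interactively: trimming whitespace, normalizing case, de-duplicating, and filling missing values:

```python
import json

# Hypothetical sample of messy API records, invented for illustration.
raw = json.loads("""[
  {"title": "  Digital Publishing ", "year": "2014"},
  {"title": "digital publishing",   "year": "2014"},
  {"title": "Open Access Trends",   "year": null}
]""")

def clean(records):
    """Trim whitespace, normalize case, drop duplicates, fill missing years."""
    seen, out = set(), []
    for rec in records:
        title = rec["title"].strip().title()  # "  digital publishing " -> "Digital Publishing"
        year = rec["year"] or "unknown"       # make missing values explicit
        if title not in seen:                 # de-duplicate on the cleaned title
            seen.add(title)
            out.append({"title": title, "year": year})
    return out

print(clean(raw))
```

The messier your real dataset, the more such steps your analysis will need, which is exactly what the complexity criterion above rewards.
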
Twitterbot: Write a program, running on a server somewhere, that interacts with Twitter without the need for manual intervention. The bot (or bots) must have distinct objectives that it carries out.

You will be evaluated on:

  • Your demonstrated understanding of the potential of Twitterbots
  • The success of your bot at carrying out the proposed tasks
  • The complexity/sophistication of your bot’s actions
  • BONUS: if you carry out an analysis of different bot behaviours

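A minimal sketch of how such a bot might be structured, assuming the Twitter calls are isolated behind a client object. The client below is a stub that records tweets locally; a real deployment would substitute an API library such as tweepy and run the loop on a timer (e.g., cron):

```python
import random

class StubTwitterClient:
    """Stand-in for a real API client; records tweets locally instead of posting."""
    def __init__(self):
        self.sent = []
    def update_status(self, text):
        self.sent.append(text)

def make_tweet(facts):
    """One possible bot objective: tweet a randomly chosen item from a list."""
    return "Did you know? " + random.choice(facts)

def run_bot(client, facts, rounds=3):
    """Main loop; on a server this would be triggered on a schedule."""
    for _ in range(rounds):
        client.update_status(make_tweet(facts))
    return client.sent

facts = ["placeholder fact #1", "placeholder fact #2"]  # invented content
print(run_bot(StubTwitterClient(), facts))
```

Keeping the objective logic (`make_tweet`) separate from the client makes it easy to test the bot offline and to add further behaviours later.
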
Google Analytics: Take a known website (for example, http://publishing.sfu.ca or http://pkp.sfu.ca) and perform an in-depth analysis of its usage metrics from Google Analytics. You will be expected to come up with key insights on the use of the property and suggestions for improvements. The Analytics Academy may be useful for getting started.

You will be evaluated on:

  • The insights you are able to derive from the analytics
  • The extent to which you take advantage of what Google Analytics has to offer
  • The use of outside tools and approaches (e.g., newsletters, altmetrics) in combination with the analytics
  • BONUS: if you combine analytics from Twitter and Facebook with Google Analytics

Evaluation

Evaluation will be based primarily on the project outcomes (deliverables), which must be negotiated with the instructor after week 1 of the project. Completion and quality of the deliverables will make up 70% of the grade; the remaining 30% will be based on the project report.