Don’t Mine, It’s Mine

We always underestimate what we have until we lose it. According to Google, my location tracking started in 2016. It is scary to know how much data is collected about you, how personal information you once thought nobody knew is all stored somewhere. Data privacy is an issue people are starting to become aware of. A survey conducted in 2016 (see graph) showed that, globally, over 50% of Internet users were somewhat or much more concerned about their privacy than in 2015. This is understandable, as companies like Facebook, Google, and Amazon are using and selling our information without our full awareness. Data privacy is a recently identified problem, and action should be taken before it escalates and a feasible solution becomes even harder to find. At this point, I think we should focus on pushing for transparency, since it is unlikely that social media companies will stop collecting our data. If users are at least informed about where their data is going, they can be a bit more in control of it by deciding whether or not to join a website or share their information with it.

The Internet is not what it used to be. In the beginning, we used it to send and receive information, and privacy was a small concern. Now, Zeynep Tufekci describes the Internet as a surveillance machine. Facebook, one of the companies that owns the most user data, collects that data to build an advertising platform that generates billions of dollars. Facebook is not open about this side of its business and talks only about its intention to connect people around the world. Does this make us as users angry? Yes! Why? For a lot of us, it is not because Facebook has our data; let’s be honest, we have been suspicious of Facebook for a long time. The problem here is transparency: how does Facebook use our data? Our data has ended up in the hands of organizations like Cambridge Analytica, who used it for things like the American presidential election without our consent. This made users worry about what truly happens behind closed doors in companies with access to so much valuable personal information.

Data is a fairly new term that businesses and people have recently started using, but not everyone fully understands it. Those in charge of making laws should be people who fully understand how data is collected, how social media platforms work, and how privacy can be breached. A recent example of how uninformed politicians can be on the topics they regulate is Mark Zuckerberg’s hearing in the U.S. When he was questioned by Congress, it was obvious from the kinds of questions some members asked that they did not understand how Facebook worked.

One of the business models I personally admire is Everlane, a clothing brand. They focus on being transparent at every step of their business, disclosing the actual cost of each item and the markup compared to other stores. People appreciated it, loved it, and bought their products. Although the Facebook business model cannot be easily changed, transparency could be a first step towards a bigger solution. If users were fully aware of how social media companies process their data and the benefits they receive in return, there would not be as much anger, and they might even be appreciative. Giving users the opportunity to agree to, or opt out of, having their data collected and sold in exchange for a benefit (for example, letting Facebook show you relevant content while the service remains free) would allow people to make informed decisions. If someone did not want their data collected, Facebook could offer the option of paying a small monthly fee instead. It is important to remember that when a service is free, the user is the product.

Facebook will not stop collecting data; data is now considered a main driver of business growth. Therefore, instead of fighting it, we should accept where we are now, and companies should use data to benefit users. Laws should be implemented not to strip companies of the ability to store our data, but to ensure that companies are transparent and users know what is being collected and for what purpose. That way, everyone can provide informed consent rather than being kept in the dark.

Orwell Would Be Proud: Privacy, Corporations and Data Surveillance

What’s the year? 1984. Not quite: it’s 2019, yet mega-corporation Facebook is running social experiments, the government is listening, and Amazon is watching. Multi-billion-dollar corporations and the government are in bed together, and they’re clearly benefiting from each other and all the information they’ve collected on us. We’ve sold our souls (private data) to the Devil (Facebook, Google, Amazon) for eternal euphoria (funny cat videos). But we agreed to it, right? It isn’t spying if we consent to it, whether we’ve read every word of the terms and conditions or not. Maybe sharing your information with just one corporation would be better? Let’s combine multiple platforms and put all the data collection in a one-stop shop, as Mark Zuckerberg is proposing. You only need one app, one platform, one secure place. You can communicate with your friends and family, make purchases, share images, whatever you like, and it’s all private (right?). Hey, it’s working for China, so why not North America and the rest of the world?

Worst case scenario? We live in an even more Orwellian future than we do now: one single source of information with one single entity in control, watching us inside and out. Amazon has developed camera technology, used in its Amazon Go store, that can tell the difference between each product on the shelves and charge the customer accordingly. The fact that these cameras can distinguish a soup can from a bag of trail mix isn’t terrifying, but imagine that technology advancing to the point where it can recognize one person from the next. As per usual, Amazon is as opaque as ever about what it plans to do with this technology, and there has been speculation about whether it will sell the technology to other companies, even though it claims to have no such plans. Oh, wait! Amazon is already selling facial recognition technology to law enforcement and the US government. Better yet, it’s not fine-tuned, which leads to more problems than solutions in the form of racial and gender biases. Can you imagine these cameras on every street, watching every move and reporting back to the government (corporations)? Google already knows where you are, but now they’ll be able to see you too.

Best case scenario? We stand up for our right to privacy and put privacy laws like the General Data Protection Regulation in place, which is a decent start to getting these companies to be more transparent. Whether we like what we see when we actually get to see it is another story, but at least we wouldn’t be blindly consenting (the biggest paradox) to the kinds of data collection they’re doing and to who they’re giving it. Not all data collection is bad; it can feed some algorithms (but not all) that help us with discoverability. But we need to take the time to examine the ethics involved in data collection and the predictive analytics that result from it. Data mining brings concerns about social inequality, discrimination, and privacy that have very real effects outside the digital world. As a society, we need to think more critically about who controls the algorithms and the data collection, and what they’re doing with it, because every corporation has its own motives that it’s not keen on sharing with us.

🎶Don’t Wanna Be an American Idiot 🎶 (looking at you, Congress)

Overall, I am unsurprised by the lack of data privacy online. I’ve known for a while now that something is tracking what I’m doing as I do it, whether it be Google, Facebook, or Apple. However, it is a bit frightening to see it all laid out in places like Dylan Curran’s Twitter feed, and to see how Google Maps tracks our movements throughout the day. What frightens me more than either of these things is what unregulated entities might do with that data on a personal and political scale.

Although I would like to believe the government is attempting to regulate big businesses like Facebook and Google, every day we see that they are focusing on the wrong things. In the Google congressional hearing held on December 11th, 2018, the American Congress had the chance to question Google on how it abuses data privacy and how it handles that data after compiling it. Instead, the members of Congress decided to focus on things that had nothing to do with privacy and everything to do with the more self-explanatory algorithms almost anyone under 50 can understand (Lapowsky, Congress).

Footage of me watching Congress date itself back to the age of the dinosaurs

This not only proved that Congress is incredibly out of touch (watch this video for evidence; these members of Congress are ridiculously embarrassing), but that the government in general is focused only on the superficial issues surrounding tech giants because it does not understand the more pressing matters. Not to mention, big companies do not want regulation, and we know that big companies have a big stake in government, regardless of what people say.

We’ve seen how companies like Facebook can influence political situations, as in the 2016 election with the Cambridge Analytica scandal. But on a more personal note, a lot of these companies gather data about buying habits that can negatively impact people on a day-to-day basis. Here, I will refer to the experience of Gillian Brockell, a woman who continued to receive ads as though she had given birth to a healthy baby after delivering a stillborn child (Kindelan, Woman).

She posted on Twitter, stating:

“Please, Tech Companies, I implore you: If your algorithms are smart enough to realize that I was pregnant, or that I’ve given birth, then surely they can be smart enough to realize that my baby died, and advertise to me accordingly — or maybe, just maybe, not at all […] We never asked for the pregnancy or parenting ads to be turned on; these tech companies triggered that on their own, based on information we shared. So what I’m asking is that there be similar triggers to turn this stuff off on its own, based on information we’ve shared…” (Kindelan).

This is just the tip of the iceberg when it comes to the ways data mining infringes on privacy. Situations like the Google hearing and Brockell’s experience (and I doubt much has been done to change the algorithm, despite public outcry) make me doubt that any government-backed venture or internal change is likely to happen any time soon. Until then, I’m just going to accept that I have to be careful with my searches and try to limit what I put online.


Works Cited

Kindelan, Katie. “Woman Demands Change from Tech Sites like Facebook, Instagram after Receiving Parenting Ads after Stillbirth.” ABC News. December 13, 2018. Accessed March 13, 2019.

Lapowsky, Issie. “Congress Blew Its Hearing With Google CEO Sundar Pichai.” Wired. December 11, 2018. Accessed March 13, 2019.

All Hands on Deck: Government Intervention in Data Privacy

Capitalism is deeply embedded in the way our modern North American society operates, shaping all of the transactions and interactions we have with companies. Big corporations worth billions of dollars have such an incredibly strong sway over what happens in the marketplace that it seems nearly impossible for an individual or small group to lobby and influence how they do business. In order to take hold of our data privacy and stop the momentum of surveillance capitalism, change will need to happen at the institutional level. We need to get the government involved.

The data privacy issue continues to grow as more and more details come out about the seemingly endless data that can be mined about us, right down to our exact daily travel path (plus our search history, files of all kinds from texts to photos to voice messages, and the list goes on). Unfortunately, I am not the slightest bit surprised when confronted with the amount of information that tech giants like Google and Facebook collect about us. The technology we use in our daily lives (phones, smart watches, apps, social media platforms, etc.) is interconnected, easily trackable, and constantly backed up to servers. We appreciate these services when they help us access information we want to store, like our emails and anything we choose to put into the cloud, such as documents and photos. We also want instant access to the data of our friends and family (and sometimes even strangers) through our social media accounts, and we willingly input data into these services on a daily basis. Our input helps these tech companies create ever more robust platforms that continually learn more about us.

What we are much less comfortable with is the data that we don’t see and how that data is ultimately being used. For the most part, our data is being used for financial gain. When it comes to data collection, I believe it’s important to remember that we as users are not really the ultimate customers of services like Facebook and Google. Yes, they have to deliver on some promises for people to still want to use their services, but ultimately these tech giants serve the needs of advertisers rather than the readers, browsers, and users of their platforms. The bigger they get, the more advertising dollars they can bring in.

The tech giants are out to dominate their industries and claim the lion’s share of their markets, and they do so by cashing in on more new tech. Giant corporations scoop up new ways of gathering data and tracking users by investing in their own research and development, or by buying smaller tech startups (see a list of acquisitions that Facebook has made here) that have tapped into something of interest. Because of their sheer financial power to dominate other businesses and bully the market, the government needs to step in.

It is quite interesting to note that even Mark Zuckerberg himself feels that it’s important for data to be regulated, but the big issue remains: how? There are a few cases where the government has stepped in, such as the California Consumer Privacy Act, passed in 2018. Its three major tenets are:

1. You will have the right to know what information large corporations are collecting about you.
2. You will have the right to tell a business not to share or sell your personal information.
3. You will have the right to protections against businesses which do not uphold the value of your privacy.

It’s hard to tell at present how well this is working in California, but it shows that passing this type of law is something people are very interested in doing (even if the big tech giants strongly opposed the bill). But it is these tech giants, with their seemingly unlimited funds, who need to be stopped, and the government can’t let them just throw a bunch of money around to try to stop the regulations.

We still have a lot of work to do in Canada, as the Privacy Commissioner has stated that the office doesn’t have the funding it needs to adequately protect Canadians against privacy issues. We as citizens need to get more involved and keep pushing our lawmakers. A new privacy law now ensures that Canadian companies have to let their customers know when their data has been leaked, but what recourse do we have once it’s been leaked? That clearly isn’t good enough.

It’s very easy to feel disenfranchised when you see that corporate giants like Amazon are buddies with government bodies like the Department of Justice, but it is still important that we continue to push lawmakers for better protection. In reference to this Mike Shatzkin article, SFU Master of Publishing student Jaiden Dembo stated, “If law can be put in place to help these behemoths grow and dominate the market, then the opposite can be true as well.” Though there is a lot of muddy water to sift through when it comes to data protection, and change will take time, it’s something that’s worth fighting for.


I have no data to hide, do you?

It shouldn’t be a huge surprise that the internet lacks data privacy, despite top tech companies saying they will implement better security and privacy, like Mark Zuckerberg’s new vision of “a privacy-focused messaging and social networking platform where people can communicate securely,” or the US government’s push for better antitrust enforcement, like Elizabeth Warren’s presidential campaign proposal to dismantle the biggest tech companies (Facebook, Google, Apple, Amazon), forcing them to break up and restricting major mergers. I walked into this idea of data privacy with a popular mindset: I have nothing to hide, so why should I be afraid if someone has the balls to hack and expose me? I still struggle to believe that a place like the internet can be private, and can’t help but reflect that, as much as we don’t like these big tech companies taking our data, it is like a paradox. We, as users of the technology, don’t want them taking our data, or sometimes having our data at all, but we still contribute to this big capitalistic system by using their technology. To benefit technology as a whole, data is required to make better products for our needs. Could it be for the greater good? I agree that when data is taken from us without our permission, we, as users, can feel mistrust towards the tech company. As Avvai shared in her blog post, “Facebook’s new privacy plan might not actually be helping us out,” it’s not about refusing to use technology at all for the best form of privacy. These services can be “really useful tools. We just don’t want it being shared without informed consent.”

Businesses try to gain as much information about us as possible so they can gain the upper hand over their competition and create products that best cater to our consumer demands. I feel like a lot of people are aware of this issue, ever since George Orwell’s 1984 put the idea of government surveillance into wide circulation. This leads me to believe that there isn’t such a thing as privacy within a public sphere; there can’t be. If you truly don’t want someone exposing you or knowing something about you, then your best bet is to share it only with the dead.

I came across this article by the Thomson Reuters Foundation that suggests future cities will run on data-driven sustainability. In the article, Toronto is described as a “smart city,” where future developments or enhancements would be made by installing digital systems in public and private spaces to record data on what inhabitants do with their garbage, water, and power. However, in a recent survey from McMaster University, 88% of Canadians said they are concerned about their privacy, with 23% of them “extremely concerned.” This makes me reflect that it’s not so much about educating the public on data privacy; a lot of people are more than aware that it is an issue. It’s about understanding what we, as tech users, should do to become better equipped with our data and to gain the agency and authority to not let big tech companies take our information without our permission. Tech companies have become so dependent on our data; could there even be another way around this? Without data, how could we see improvement in any innovative endeavour in the technology in our lives, or in a city we live in, like Toronto? Geoff Cape from Future Cities Canada shares that “despite the privacy concerns, effective data use is crucial for combatting the environmental challenges cities face and making them better places to live for growing populations.” Tech companies have become so dominant in our society that I’m not convinced a proposal like Elizabeth Warren’s can save us now. We’re in too deep.

If the potential for data privacy breach is the enemy, awareness is your weapon

Undeniably, data privacy has become an increasingly important issue after numerous data breaches throughout the 21st century, the scandal around Trump’s election, and the growing concern surrounding Facebook and its collection and use of personal information. Data is everywhere, yet many of us don’t really know its hidden value! We as a society value personal privacy, but we often fail to think twice about our privacy in an ever-growing digital world. For me, growing up, the idea that whatever you put online can never really be deleted has stuck. Whether this was true or not, it framed the way I interacted with the internet and determined what information I put out there myself. The stark Twitter thread by Dylan Curran scarily shows us that, boy oh boy, it’s true.

With almost all of North America actively engaging with the internet, and children starting at a very young age, the question becomes: why isn’t there greater awareness of data privacy? For me personally, internet safety was not an issue that was fully addressed in my own education. Reviewing the BC educational curriculum, it is unclear whether data privacy is being taught in classrooms, or to what extent. Under digital literacy, the following content is the intended teaching outcome:

Internet safety

  • digital self-image, citizenship, relationships, and communication
  • legal and ethical considerations, including creative credit and copyright, and cyberbullying
  • methods for personal media management
  • search techniques, how search results are selected and ranked, and criteria for evaluating search results
  • strategies to identify personal learning networks

As children are engaging with technology at a much earlier age, schools should be doing more to educate students, as they are probably among the most vulnerable and, soon enough, the target market. While the current curriculum does address some of these topics, it might be helpful to build a better understanding of the privacy policies and settings that can protect you and your network of friends. As for the rest of us, it would be helpful to become self-aware of data privacy issues; we could probably start by reading the terms and conditions of the sites we engage with. It is refreshing to know that this is starting to be discussed on a political scale, with Elizabeth Warren proposing in her presidential campaign to break up large companies such as Facebook, while other candidates share similar sentiments.

Facebook’s new privacy plan might not actually be helping us out

This week Mark Zuckerberg announced on his blog a new vision for Facebook, social media, and the web. He wants to build a messaging platform that’s privacy-focused, and he dives into the six principles he wants to enforce: private interactions, encryption, reducing permanence, safety, interoperability, and secure data storage. He compares this space to a ‘private living room,’ as opposed to the ‘town square’ approach to social media.

The Guardian’s response to Zuckerberg’s post explains that this would be done by integrating the messaging systems of Instagram, WhatsApp, and Messenger.

I think there are two problems with this:

  1. Integration
    I can see the appeal of integrating the different messaging systems into an all-in-one platform: you don’t have to waste time checking multiple apps, you won’t have to worry about which platform to use to message a friend, and so on. Personally, though, I would find this annoying, as I use these apps for different purposes and check them with differing regularity. I don’t necessarily want to be seeing messages all the time from the various networks these apps connect me to. However, aside from my personal views, I think this move allows Facebook to become an even more powerful factor in our lives; it wants to curate and shape our living room / private space as well. Not only that, there are still problems that can occur in these so-called “private spaces”. For example, India will be holding its general elections this year, and they have been dubbed the “WhatsApp elections”. WhatsApp is highly popular in India, and political parties have been recruiting “cell phone volunteers” to create neighborhood WhatsApp groups and spread biased information. The same misinformation problem Facebook faces can still occur on private messaging platforms like WhatsApp. According to the news article, “The misuse of WhatsApp has been connected with at least 30 incidents of murder and lynching, for example following the circulation of children abduction rumors.”
  2. Ignoring the original problem 
    In his blog, Zuckerberg starts off his piece by saying:

    “Over the last 15 years, Facebook and Instagram have helped people connect with friends, communities, and interests in the digital equivalent of a town square. But people increasingly also want to connect privately in the digital equivalent of the living room.”

    Sure, we want that! But judging from the reaction to the Cambridge Analytica scandal, what people really want is for their private data not to be sold to advertisers without their informed consent. It seems like Zuckerberg is ignoring the problem of Facebook’s data-surveillance business model (or perhaps just trying to shift our focus) and trying to grow and expand his already massive business by implementing a new platform. Data that was supposed to be shared only with our friends, family, and the people we chose to have on our Friends List was sold to third-party advertisers. How is creating a private messaging system going to solve that issue?

    Facebook is not getting rid of the newsfeed, which I don’t think people want gone anyway; I think we still want to share things with a wide range of people. We just don’t want Facebook sharing our private data from our private profiles and from our apps. For example, Sophie shared an article with us about how apps like a menstrual-cycle tracker and a heart rate app share their data with Facebook, which in turn sells this information to advertisers. We don’t want to stop using these apps; they can be really useful tools. We just don’t want it being shared without informed consent.

Overall, I think Mark Zuckerberg is not addressing the problem the public is criticizing him for, and is instead introducing new growth models for Facebook. I’m not sure we’re gaining anything from this new policy move. Zuckerberg is obviously a smart guy. My personal view is that he’s very aware of our growing fear around sharing information in public spaces. He might also be forecasting a decline in the use of public spaces like Facebook and Instagram as more of the public comes to understand the data privacy issues. Therefore, to keep his business growing, he’s trying to expand his services into the private communication sphere, because until we become telepathic we’re still very much dependent on communicating with one another through technology.

Facebook’s World Domination Over Our Data

In 1999, Scott McNealy, then-CEO of Sun Microsystems, famously declared, “You have zero privacy now anyway. Get over it.” Google CEO Eric Schmidt warned that “if you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place.” Mark Zuckerberg, the world’s sixth richest man, decided that privacy was no longer a social norm, “and so we just went for it,” while Alexander Nix, of the data firm Cambridge Analytica — famously employed by both the Brexit and Trump campaigns — brags that his company “profiled the personality of every single adult in the United States of America.” —Samuel Earl, 2017


Data Privacy 101: An Introduction to Surveillance Capitalism

The issue of data privacy is of central importance in the modern age, and, given the business models that now depend on metrics gathered via surveillance, it doesn’t seem that this will change in the near future. Furthermore, much of people’s discomfort around data gathering seems to stem from a lack of transparency and knowledge about what data is gathered and stored, and how that data is used. As a result, and influenced by the education I received about sharing on social media, I think that education about this issue should be built into curriculums, and that it could be spearheaded by the government.

Corporations often argue that users have agreed to have their data monitored and collected. However, the terms by which users agree to this are invariably written in legalese and buried deep in long contracts that users have gotten used to skimming or ignoring completely, because they are so long and often impenetrable. I think that even if users did read the entire document, they often wouldn’t fully understand what was being communicated or what they were agreeing to.

If the issue is a lack of understanding and knowledge about data collection and use, then the method of redress should aim to demystify and make transparent the issue of data collection and use. The problem is that, as surveillance capitalism becomes more and more commonplace, and the methods by which data is gathered, and—in fact—the data gathered become more and more extensive, we can’t expect private companies who stand to profit under this system to educate people. It would be great if they did, but they stand to gain too much from people remaining uneducated.

For this reason, I actually think the government could and should assume the responsibility of educating people about data collection and privacy. When I was in high school, we had a number of assemblies and lectures about what sort of information we were sharing on social media. It was framed as a matter of safety, and also from the perspective that nothing that was shared could ever really truly be deleted or taken back.

In a lot of ways, a conversation about data collection is an extension of this issue—essentially, it is still a matter of privacy. The difference is that the lessons I was taught in high school were about information and content I was choosing to share, whereas the conversations we need to be having now are about information that is being collected without our knowledge.

I think that educating people about how their data is collected and used is essential to their being able to make informed decisions about their digital lives. Furthermore, the current structures in place for doing this (Terms and Conditions documents, etc.) are not accomplishing it, probably because ignorance of the matter is actually in corporations’ best interest. Therefore, the government should intervene and build education about data privacy into curriculums. It should become a basic part of people’s consciousness, as digital technology is increasingly intertwined with people’s daily lives, and surveillance capitalism may be here to stay.

Loving Big Brother: What Facebook’s Recent Business Decisions Say About Its Vision for the Future Web

Facebook’s Mark Zuckerberg is apparently shifting the company’s focus to users’ privacy. In a blog post, Zuckerberg wrote that Facebook “plans to integrate Instagram, WhatsApp and Messenger so that people can communicate privately and directly across networks”. These communications would be fully encrypted, preventing anyone—even Facebook—from seeing the content shared on their services.

While this sounds fantastic in theory, especially in light of the Cambridge Analytica scandal early last year, we should wait before cracking open the champagne. Zuckerberg’s new vision for the Internet is, in many ways, more vulnerable to an Orwellian future than our current one, which relies on a model of near-constant surveillance.

We’ve talked a lot about what the Web used to be and what it has become. We’ve also talked about nostalgia for the days when it was a community which functioned on the dissemination and sharing of ideas rather than a commercial marketplace. One thing I remember distinctly about these conversations was a warning against the Stream and a platform-based Internet—every author we read advocated for a diverse Web with an equally diverse array of websites and voices.

This is something I could not stop thinking about while reading Isaac’s article.

This is a good time to mention that I’m just as creeped out by Facebook’s surveillance as the next person. I’m not comfortable with the company collecting my data and selling it to the highest bidder, and I’m not comfortable with its algorithms inferring my sexuality and interests from my friends and online behaviour. What I’m even less comfortable with, however, is the already arguably homogeneous Internet being consolidated into a handful of platforms, and I honestly cannot think of anything more terrifying than the general experience of the Internet shrinking to a single app.

This is essentially what Zuckerberg is advocating for. His decision to merge the messaging infrastructure of Instagram, WhatsApp, and Messenger and to create a private, secure messaging service is the first step, no doubt consciously mimicking China’s incredibly popular app, WeChat. WeChat is a one-stop shop for everything from investing to takeout to intimate conversation, and something the Chinese government surveils vigorously in order to collect information on its citizens. Though Facebook promises end-to-end encryption and assures us it will not host user data in countries with questionable human rights records, I trust the company about as far as I can throw it. Furthermore, even if it fulfills all these promises, the merging of all three apps (and we can’t forget Facebook’s newly released answer to Patreon) is a very deliberate step toward a future Internet that resembles an intranet more than anything else. Why use any other service or website when you can do it all in one place? Though incredibly convenient, it would also put everyone’s data in the hands of one company. And what then? What happens when Facebook is no longer content to keep that private data encrypted? Or when it decides to share the data with a government in the interest of safety and security?

I’m also incredibly curious as to how Facebook plans to stop the spread of misinformation if it won’t be looking at what’s being shared on its new messaging service. AI can only moderate to a certain degree; will its algorithms be fact-checking? If so, this presents a whole slew of other problems in terms of how we use language to communicate (will its AI understand hyperbole? Sarcasm?). Will the service be integrated into WhatsApp? Will users’ public data continue to be sold to the highest bidder? Will the encryption truly prevent even Facebook from seeing users’ content? How am I supposed to trust any of this when Facebook is still beholden to its real clients: its investors? Needless to say, I am made of questions… but according to Isaac’s article, Zuckerberg is tight-lipped about the entire project.

All I’ll say is this: I would much rather have my data gathered, anonymized, and sold than see Facebook create a monster app that takes over the Web. I am not comfortable with using the same service and company for all my needs; it kills any and all innovation in the market… and I question anyone who doesn’t see Facebook’s recent business decisions as a move toward a singular, app-based experience of the Internet. If a government were taking these steps, people would be alarmed. Why not for a private company that is beholden only to its investors? Whose mission is to protect its own interests over anything else?

Conglomerates and monopolies concentrate great amounts of power and wealth in the hands of a select few. When paired with the Internet, this can all too easily turn a tool that once freely disseminated a wide range of ideas into a vehicle for a singular point of view, especially when those most vulnerable in a society do not have the luxury of boycotting it.