Pub 802 Reflection

As I sit here at home, munching on an apple and reflecting on my experience in Pub 802, trying to boil an entire semester down to a couple of talking points, I realize how hard it is to put my takeaway into words. So I’m going to try to speak to it through what I liked and didn’t like about the structure of the class.

In regard to the structure, for the most part I thoroughly appreciated the unusual way of doing things. The student-led syllabus, discussions, readings, and blog topics let us have more say in what we wanted to learn. It was a unique way to give us students more agency in, and therefore feel more engaged with, the course content. What I found to be the best aspect of the course structure was the annotated readings. I do disagree with how they factored into marks – mainly that there is an arbitrary requirement to comment enough times to earn a satisfactory for a week, which to me encourages superficial engagement with a topic and doesn’t actually represent the time or thought put into a reading – but I found them to be the most helpful portion of the course. I’m a slow thinker and generally spend a lot of time doing things that most people do in a fraction of the time, including putting my thoughts into words. The online annotations allowed me to participate in discussion around the readings in a capacity I cannot achieve in seminar discussions. They let me read, reread, comment, reply, and think about what external source to bring into the conversation. They also played into the psychology of social media notifications through emails about replies, which made it kind of exciting to participate and leave a comment.

What I didn’t like, mostly because I found it more difficult to engage with or care about, were the weekly blog posts. I’m the type of person who prefers a few big projects over consistent small projects, as I have a binge-work ethic. The requirement to write a bunch of small blog posts meant I had to force myself to write about a topic I didn’t care about, which I never think is a good thing, or force myself to write briefly about a topic I had a lot to say about, which I also don’t agree with. Both leave me feeling like I just submitted rushed work for the sake of submitting. Factor in the fluctuating workload of other courses throughout the term, and the “consistent” workload of 802 became more and more of a burden. I would have much preferred a system of choosing fewer topics from all of those available and being able to write more detailed blog posts. While I understand that would create an imbalance in the number of blog posts each peer would be saddled with giving feedback on, I ultimately think it would be a more productive form of learning, at least for me. (What if students signed up for topics much like we signed up for weeks to lead the class, in order to balance the blog-post-to-response workload?) Even as I read the blog posts for my week that I need to give feedback on, a lot of them read like they are just going through the motions to answer the prompt. Perhaps that is partly because it is the last week, but I also think it is because they are too short for students to go into enough depth. Reading through 10+ of these short bursts of thought leaves me feeling like, as the feedbacker, I’m just saying the same things over and over again on each post: “Oh, this is your opinion? Did you think about this part, though?” In summation: I do not feel engaged or that my time is being used productively. (Note that this is what it feels like and is not a reflection of the actual quality of the blog posts.)

I have spent most of my words on the structure and how it worked for me because I honestly do not think the content of this course changed my idea of the role of technology in our lives. That is not to say the class was not valuable, as it certainly deepened my understanding, but I have not come out of it with a different approach to my future than I had before. No, I will not be taking away new information about the ways in which technology is changing our society and blurring the roles of the people within it. My relationship with technology has not changed because of this course; my opinion on tracking has not changed. What I will be taking away are questions, thinking points about the implications of technology that will, in the future, continue to influence my changing understanding of the effect of technology on society. A couple of years down the line I will look back and think: “You know what? I wouldn’t have this current perspective on technology without those thinking points given to me in that Pub 802 course so long ago….”

Let’s get more digital content goin

Publishers are stuck in the age of print and are trying to force the digital environment to conform to print standards. While ereaders are great for their portability and convenience, for a lot of readers there’s not enough that’s different to draw them away from print. As Hachette Livre CEO Arnaud Nourry put it, “The ebook is a stupid product. It is exactly the same as print, except it’s electronic.”

I half agree. While much more can be done, ereaders are still in their infancy, and they will grow to incorporate enough features to become a whole new media-consumption tool, separate from books. Some features I would like to see added to ereaders include:

  • Audiobooks incorporated with text
    • Audiobooks that also come with the text of the book, with a highlight that follows the words currently being read aloud. This is for accessibility and further reading assistance for those with greater barriers to reading.
  • Pop-up glossary
    • The ability to highlight a word and have a definition appear in a small hover window, much like how hovering your mouse over a hyperlink on Wikipedia opens a small preview of that hyperlink’s page (see the sketch after this list).
  • Annotations
    • This should be obvious.
  • This.
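
To make the pop-up glossary idea a bit more concrete, here is a minimal browser-style sketch in TypeScript. It is purely illustrative: the fetchDefinition function is a hypothetical stand-in for whatever dictionary an actual ereader would ship with, and a real device would respond to a tap-and-hold on its touchscreen rather than to mouse events.

```typescript
// Hypothetical dictionary lookup; a real ereader would query its built-in dictionary.
async function fetchDefinition(word: string): Promise<string> {
  return `(definition of "${word}" would load here)`;
}

// Show a small floating definition box next to the reader's current text selection.
document.addEventListener("mouseup", async () => {
  const selection = window.getSelection();
  const word = selection?.toString().trim();
  if (!selection || !word || word.includes(" ")) return; // only single words

  const rect = selection.getRangeAt(0).getBoundingClientRect();
  const box = document.createElement("div");
  box.textContent = await fetchDefinition(word);
  Object.assign(box.style, {
    position: "fixed",
    left: `${rect.left}px`,
    top: `${rect.bottom + 4}px`,
    padding: "6px 10px",
    background: "#fffbe6",
    border: "1px solid #ccc",
    borderRadius: "4px",
    maxWidth: "300px",
  });
  document.body.appendChild(box);

  // Dismiss the definition the next time the reader clicks or taps anywhere.
  document.addEventListener("mousedown", () => box.remove(), { once: true });
});
```

The pattern itself (detect a selection, look up the word, float a small box beside it) is essentially what Wikipedia’s link previews and the built-in dictionaries on existing ereaders already do; the wish here is simply for it to become a standard, comfortable part of every reading app.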

However, it’s not realistic for a publisher to just make a better ereader. Instead, there are other digital content strategies a publisher can adopt. Talking about digital content actually makes me reminisce a lot about the book project last term. My group spent a long time devising how to include digital content in our publishing plan, and what we came up with is exactly what I would like to see done in the real world.

So without further ado, I’ll take a cue from CuePub.

One thing I learned during the book project exercise is how much variety there is in possible digital content. We managed to come up with four unique ways to use digital content to enhance our four books, rather than just porting them to a digital platform.

One of my favourites was what we did for the graphic novel – we envisioned an environment for fans to create and upload their own fan stories. As a publisher, supporting and encouraging communities, especially around serialized publications, helps grow and strengthen the fan base. If you have a series that inspires strong attachment to its characters, a series that people will write fanfiction for and upload somewhere else anyway, why not host the community yourself and encourage their attachment to the series?

However, what I most want incorporated into a publisher’s business plan is not digital content to complement a printed book, but digital books that form a completely separate catalogue from printed books. What Penguin is doing in India with mini-books for mobile is genius. Finding ways to create digital-only content, to neither be secondary to nor replace the print book, is something more publishers should be doing.

Invasive Tracking – Is it so bad?

Digital tracking, and the big data it produces, is like every other technology in this world, including books: it can be used to the benefit or detriment of humanity. There are huge ethical considerations about which uses, and how much of it, are appropriate, and I myself am a bit torn on the subject. The vast amounts of data collected can be used to better understand human psychology, perhaps at a scale that traditional experimental methods cannot accomplish, and this knowledge can be utilized in different ways. On one hand you have the Cambridge Analytica case, showcasing how this data can be used to manipulate people at a societal level with huge consequences. On the other hand you could, for example, take the results of the controversial Facebook experiment in which people’s social feeds were manipulated to see how it affected their emotions, and use them to create a happier user base – applying the findings to reduce the negative-to-positive content ratio on people’s feeds and improve their emotional health (to whatever extent it can). On the other other hand, that same data from that same experiment could be carefully implemented by Facebook to steer people’s emotions (to whatever extent it can) towards some sinister end goal.

Data tracking has the potential to be used for more than just capitalism and marketing; it can be used to better understand human behaviour, and I do not think there should be an imposed limit on what kind of tracking can take place – so long as it is all transparent, honest, and consensual. I think of the internet as a shopping mall, and Facebook, or any other website, as a storefront. If you are entering somebody’s website (somebody’s store), they have the right to know and understand who their customer base is; they have the right to know a little bit about you. In a physical store, they can learn this from physical cues (the owner sees you enter the store; maybe you’re wearing a shirt that says something, or you go directly to a specific section to browse, which gives cues about your interests) or from social cues (the owner strikes up a conversation with you to find out what you like, to be able to make a recommendation). There might be a loyalty rewards program tracking your purchases to understand your likes and tailor recommendations to you. Online stores just have different ways of tracking your behaviour, and the potential to generate a lot of data from that tracking automatically.

For me, the unethical part comes in at the use stage. Once all of this data is acquired, it can be used to better serve the customer, the patron, the person regularly visiting your site, and so on. But it can also be used in terrible ways – sold to other corporations, weaponized to manipulate people at a societal level, etc. When identity becomes a commodity, data has gone just a touch too far.

When your online behaviour on one website affects how another website responds to your browser, then I think there’s a problem. As the pizza demonstration from Ghostery showed, it’s a little unsettling to have your information spread without your consent.

To get back to the question, now that I’ve laid out my stance, I’ll relate this to the publishing industry. Publishers, platforms, distributors, etc., have the right to collect information on their customers – take the Jellybooks example of tracking reader behaviour in ebooks. But if, say, my reading habits in my ebooks started to influence the advertisements I see in the Chrome browser on my laptop, then that is an unethical – and, I think, what should be an illegal – distribution of that information without my express permission.

The person or corporation collecting the data should be responsible, and held accountable, for the data they collect. But they can certainly collect data to help them better serve the customer, if they so choose, and if they are transparent about and receive consent for their use of it.

Extreme Data Capture

I’m going to make a possibly bold statement here: I do not care about, nor want to collect, data on readers’ impressions of books. Which, I realize, is maybe not very smart from a publishing-as-a-business standpoint, but aside from general reception based on reviews, I do not want to know how readers react to or interpret a book. I think data on how readers discover books is more important, and that is data I would be interested in for marketing purposes, but knowing too much about reader impressions would have an effect on editorial decisions, and that’s just something I’m not willing to negotiate.

However, for the purpose of this post, I’m going to propose this form of data capture: a camera in e-readers with facial-recognition, eye-tracking, and heat-vision capabilities that can capture a reader’s emotional response from physical signs (facial expressions, pupil dilation, cheek flushing, whatever other signs of emotional response there are) and, using the eye-tracking, match it to the specific passage being read while that response takes place. Sounds expensive, yes, and more of an invasion of privacy, but this is a purely imaginative piece.

Using this patent-pending Emotional Response Reader technology, coupled with AI to sort through the data, the data analyzer (whether that’s the publisher or Amazon or whoever) would be able to study such things as the passages that garnered the most (blank) emotional responses, or the sections that left readers bored or confused, and other details that would help the writer/editor/publisher better understand the sentence construction, flow, and narrative structure that work for a particular audience. It could also construct graphs of emotional changes over the course of the novel. Armed with this data, the publisher could better select books, the editor could better edit books, and the writer could better write books for an audience they know the book will sell to.
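
As a rough sketch of what that matching step might look like (purely hypothetical, since no Emotional Response Reader actually exists), here is a short TypeScript example. It assumes two made-up data streams: timestamped emotion samples from the camera, and eye-tracking records of which passage the reader was looking at and when. It pairs them up and averages the response per passage and emotion.

```typescript
// Hypothetical output of the imagined Emotional Response Reader.
interface EmotionSample {
  timestamp: number;   // seconds since the reader opened the book
  emotion: string;     // e.g. "joy", "tension", "boredom", inferred from facial signs
  intensity: number;   // 0..1, inferred from expression, pupil dilation, etc.
}

// Hypothetical eye-tracking record: which passage the reader was looking at, and when.
interface PassageView {
  passageId: string;
  start: number;       // gaze entered the passage (seconds)
  end: number;         // gaze left the passage (seconds)
}

// Match each emotion sample to the passage being read at that moment,
// then average intensity per (passage, emotion) pair.
function emotionByPassage(samples: EmotionSample[], views: PassageView[]) {
  const totals = new Map<string, { sum: number; count: number }>();

  for (const s of samples) {
    const view = views.find(v => s.timestamp >= v.start && s.timestamp <= v.end);
    if (!view) continue; // the reader wasn't looking at any tracked passage

    const key = `${view.passageId}|${s.emotion}`;
    const entry = totals.get(key) ?? { sum: 0, count: 0 };
    entry.sum += s.intensity;
    entry.count += 1;
    totals.set(key, entry);
  }

  const averages = new Map<string, number>(); // "passageId|emotion" -> average intensity
  for (const [key, { sum, count }] of totals) {
    averages.set(key, sum / count);
  }
  return averages;
}
```

An editor could then, for example, flag the passages with the highest averaged “boredom” score, or plot each emotion’s averages in reading order to chart the emotional arc of the book.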

This will, I believe, cause more homogenization of literature than there already is from trend-based publishing, but if applied sparingly, a publisher could use it to try to craft the bestseller that helps fund other publishing projects.

This would also create valuable datasets for other AI. Recommendation AI could suggest a book to a reader because its emotional response data is similar to that of another book the reader liked. Writing AI could use the data in composing new works. Selection AI could more accurately select manuscripts for publishers to consider, and so on and so forth (“better” being subjective to a particular publisher’s interests).
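
To make the recommendation idea concrete: if each book’s aggregated responses were flattened into a fixed-length vector (say, average intensity per emotion for each tenth of the book), a recommender could rank candidate books by cosine similarity to a book the reader loved. A minimal sketch, again using entirely hypothetical data structures:

```typescript
// A book's "emotional profile": a fixed-length vector of averaged emotion intensities,
// e.g. one slot per emotion per tenth of the book. Hypothetical, like everything here.
type EmotionProfile = number[];

// Cosine similarity between two profiles of equal length (1 = identical arc, 0 = unrelated).
function cosineSimilarity(a: EmotionProfile, b: EmotionProfile): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  const denominator = Math.sqrt(normA) * Math.sqrt(normB);
  return denominator === 0 ? 0 : dot / denominator;
}

// Rank candidate books by how closely their emotional arc matches a book the reader liked.
function recommend(liked: EmotionProfile, candidates: Map<string, EmotionProfile>) {
  return [...candidates.entries()]
    .map(([title, profile]) => ({ title, score: cosineSimilarity(liked, profile) }))
    .sort((a, b) => b.score - a.score);
}
```

This is the same similarity trick many existing recommendation systems apply to purchase or rating histories; the only new ingredient here is the (imaginary) emotional data.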

I do not think this form of data capture is very feasible, though, as people would (I would hope) be very reluctant to allow this kind of behaviour tracking. I mean, suspicion of spying through webcams has become widespread enough that tape over a laptop camera is not an uncommon sight, so I do not think society would accept this technology in e-readers.

Which is a good thing.

Eh, I think it’s pretty cool

Theories about artificial beings with human-level or greater intelligence have been part of some cultural canons for millennia – Greek myths, for example, spoke of artificial beings coming to life (though usually by means of divine intervention rather than mathematical processes). Since then, AI has become a very scientific idea that tries to use our understanding of how the brain works to reproduce it in mechanical terms. And the idea of AI being so human-like has spurred many, many interpretations (novels, movies, etc.) of a future where artificial beings – of organic or inorganic material – roam the world with us, think for themselves, and become the biggest threat to the human way of life.

While the fear of a singularity or an AI takeover has been an undercurrent in parts of society for a long time now, today that fear seems to be taking a more tangible form as actual AIs are constantly being developed, upgraded, released, and shown to handle more and more human-like tasks (like that AI that can paint).

The fact that AI is proving itself to be, maybe not everything, but at least some of the things it’s chalked up to be makes some of those fears justified. It’s the Industrial Revolution all over again – what’s going to happen to people’s jobs? Especially now that it’s not just production, transportation, and other practical jobs being threatened, but creative jobs as well!

But jobs are one thing; AI also has the potential to take over positions of power (I mean, if people will vote animals in as mayor, it’s not far-fetched to see an AI being voted in as mayor some day). Will AI ever attain true consciousness and self-reflection and fulfill the roles imagined in countless sci-fi stories? Who knows! We don’t know enough about how consciousness is generated in our own brains to give a definitive answer, but what we do know is that machines modeled after processes in the human brain can replicate human thinking to some extent, and this most certainly will allow them to take on various roles in our society, and in our publishing industry.

Editing, publishing, authoring, distributing – all are theoretically possible and in some cases already happening. Will it be a smooth transition? Probably not. People are hesitant to trust the judgement of AI. But as time goes on and AI grows further into the industry, I think it definitely has a shot at becoming the norm for tasks like distribution, editing, and design in the self-publishing sector.

Beyond considering the ethical implications of job loss, there is the question of societal reception of AI-produced books. Traditional publishing is “supposed to” act as a filter, to provide us the books worth reading. But if traditional publishing integrated with AI, I think it would lose a lot of that cultural power. There is a cognitive dissonance induced by a mind without a human body behind it – because we are biologically, evolutionarily programmed to seek connection with other humans – and learning that a particular artwork was produced by algorithms rather than organic human thought can give it a feeling of emptiness. Because of that human bias, I do not think the AI’s ability to fulfill those roles will necessarily translate into those AI being employed, at least in the near future. (Will AI vs. human editing/designing/authoring become an important factor in metadata as well? I think so!)

Publishing Online: Disruptive but in a Good Way

Online business models (or at least an online portion of a business model) are fast becoming an almost-requirement for businesses in a majority of industries. Publishing, as it happens, is one of the industries most naturally attuned to the digital revolution – the act of it, though unfortunately not the business models surrounding it. The act of publishing has become so accessible and easy that creating a business out of it can seem incredibly daunting, nigh impossible. Because if “anyone” can do it, where is your monopoly, and how do you make money off of it?

The advent of the internet has absolutely disrupted publishing as it once was, but not, I think, in a detrimental way. It has reinvented how a vast majority of publishing – the making public of things – is done, has changed a lot about how distribution and publicity are approached, and has even affected the kind of content consumed. But I think, in time, as society comes to terms with new technologies, it will prove to be beneficial, so long as we do not measure that benefit purely in terms of vast monetary gain and world dominance in an infinitely, kaleidoscopically niche industry.

The accessibility of digital publishing has created many new platforms and ways of sharing vast amounts of content, as immediately or as delayed as the content publisher desires. But this new flexibility in content distribution comes at quite a literal cost: revenue.

In the short term this may seem like a detriment, but in the long term, I think new business styles that better reflect our online ecosystem will be found and developed (as they already are being). And publishers should fully embrace this new modality, keeping an open mind about how to make it work, so that new ways of generating that necessary revenue can be found.

Brick and Order From Online

Brick and mortar stores evolved with the advent of the internet, and now internet business models are moving into brick and mortar stores (like Amazon). Is this an evolution or a devolution? How do you see things developing in the future?


Thinking about physical and digital stores as either evolutions or devolutions of each other relies, I think, on incorrect assumptions, and it treads into the trap of thinking that technology and society follow a constant, linear, forward evolution. Much like how the ebook was not the next evolutionary step of the book, destined to completely replace printing (instead the two now coexist), physical and digital retail exist in different spaces and appeal to different crowds.

Brick and mortar did not evolve with the advent of the internet; brick-and-mortar stores still exist. The digital store evolved out of the internet to run alongside them.

I think Taylor put it best: the kind of business determines the kind of store best employed, unique to each business.

With that pedantry out of the way, I would answer this question with a big ol’ unmistakably decidedly unquestionably unequivocal: it depends. It depends on the business, on its own mission and its own definitions of growth. It depends on how that business is entering the new market.

For Amazon, I would say its new store is an evolution on two counts. For its own business model, it is expanding into a new market: it is growing the ways people can shop from Amazon as well as what a Prime membership nets you. It is reaching into the physical space to secure more of the overall retail market share. Consider that Amazon’s sole goal is to be the “biggest store in the world.”

It is also an evolution of physical store spaces in general: this is the first time a physical retail space has been so fully integrated with a digital app, allowing people to just walk in, pick stuff up, and leave, with the app automatically charging their account.

But going one way or the other might not always be an evolution. A brick-and-mortar store might try to evolve into the internet space to tap into another market, but botch its online interface so badly that it hurts the overall brand of the store, causing a possible devolution. Just a thought.

I don’t think the physical store will ever die out. I think, buried somewhere deep in the human psyche, is a persistent and everlasting need to experience physical life. Maybe for some that need is gone, but it will forever remain somewhere, in some portion of our population, because the human mind is far too complex for the entire species to be streamlined into internet shoppers. There will always be the individual who worships the shelves of a bookstore, or delights in the social interactions of a communal shopping space. Anti-social efficiency won’t take over the population because the population is too varied, and so the physical store will always remain to serve certain psychographics’ needs, while the online store coexists to serve others.

In reference to our in-class discussion about how Amazon Go is a long-term investment intended to get people used to a quick, easy, efficient shopping experience, making a regular store like Safeway feel unbearably slow: that will only apply to a certain sector of the population. I will forever maintain that there will be a portion of the population who want, or need, that classic brick-and-mortar experience of social interaction as part of the shopping process.

Copyright Clarified

Copyright is a contentious issue. It’s difficult to come to a consensus, as a society, on what rights a creator should retain over their product and what rights a consumer should have to it. On one end is the individualistic idea that the creator should maintain absolute rights to their creation: consumers merely pay for a license to use it as it is meant to be used, and they are never truly owners of the object. Polar to that is collectivism, where a creator never owns anything they create, as a person is just a manifestation of the larger society they belong to, and it is society’s right to own what the individual creates.

As far as I’m concerned, copyright law should take a spot in the middle: one that allows the creator to be compensated for the time and effort put into the creation, but just as equally recognizes that no person creates out of a void – everyone draws from everything society as a whole has crafted – and so society as a whole should benefit from the creation.

I do believe that the right to make copies of a thing should fundamentally remain with the creator, and that the right to give those rights away belongs to the creator (the thing being an expression of an idea and not the idea itself, as current copyright law requires). This is the foundation of current copyright law, and what follows might be where I depart a little bit from it.

I will cursorily note that I think the period of copyright is way too long, and that everything should enter the public domain immediately upon the creator’s death since, just as that creator benefited from society, society must then benefit from them. But what I would rather talk about is what seems to me to be a growing argument these days about the control of, and right to, a specific manifestation of a creation. This conversation seems to stem from DRM and how the digital world has disrupted the physical idea of sharing.


Although there are a lot of dystopic predictions about how DRM will affect society in the long run, as well as some not-so-great uses of it happening already, the principle of it – to let us mimic the limitations of physical-world sharing – is important. When somebody purchases something, such as a movie or an ebook, they should not be allowed to copy it and send it to somebody else. Can they invite their friends over to watch the movie with them? Can they lend the ereader they purchased the ebook on to a friend to read it? Absolutely. How they use or share their purchased copy of a thing should be completely up to them, except in the case of creating another copy of it to be distributed.

But the language seems to be changing from owning something to licensing it. And in licensing it, you are agreeing to access it only for a set of specific intended uses. The idea of DRM getting out of control and further dictating how people use the products they buy is a scary one. DRM is a great tool for enforcing copyright in cases of infringement, but, as the article points out with GM using DRM to control who can diagnose an engine, it can also allow the creator to maintain too much control over a specific instance of their product that they have already sold.


But I don’t think DRM and the current DMCA Section 1201 are the root of the problem; I think that, as a society, we tend to allow some of the bigger corporations to retain control of their products beyond the point of sale. More generally, we as a society encourage the idea of ownership and possession beyond what is reasonable. DRM laws are just a single manifestation of that.

Outside of copyright, I would point to the case of Monsanto v. Schmeiser as a comparable example of how this mentality exists apart from DRM. Monsanto essentially licenses the use of its seeds to a farmer on the condition that the farmer will not replant the seeds produced by the plant the following year but will instead purchase new seeds. To me this is an example of our society protecting a corporation’s unjustly overextended ownership.

I think that in copyright, and in all laws governing the ownership and use of purchased things, it is very important we lean towards the consumer having complete ownership of, and the right to use, what they purchase in whatever way they want, barring reproducing it (unless the creator has explicitly given the right of reproduction away). In order to protect the creator’s livelihood, the right of reproduction is important. But licensing use seems to go beyond that, into protecting the creator’s further interests, and I don’t think that is something that can be justly protected by law, as it leans too far toward the individualistic side.

GAFA: a screenplay

Foreword: Due to the sci-fi-like nature of this blog topic, what with predicting a future involving technology giants, I stylized this blog post as a Blade Runner parody. The first paragraph below, in the scroll box (for aesthetic reasons), is nearly word-for-word the intro text from the movie, with nouns changed to fit my hypothetical future.

Late in the 21st Century, THE BIG TECH CORPORATIONS advanced digital evolution into the GAFA phase – a dominance virtually inescapable to a human - known as the Dawn of Data Collection. The GAFA products were superior in spying and data collection, and at least equal in intelligence, to the everyday people who used them. Algorithms were used On-Line as news curators and advertisement placers, in the hazardous exploration and colonization of people’s minds. After a bloody PR disaster by GAFA companies in On-Line news leaks, GAFA practices were declared Not Very Cool - under penalty of being lightly rebuked by people who will inevitably continue to use their products and services. Special Internet Activists - BLADE RUNNER UNITS - had orders to spread the word, upon detection, of any hint of a dystopic GAFA ruled future. This was not called conspiracy theory. It was called educating the masses.

GAFA, the four major tech players (Google, Amazon, Facebook, and Apple), dominate society in the digital sphere, but where will they go in the future? These days they face plenty of criticism, but ultimately, people continue to return to them for questionably essential services. Search algorithms, curated news feeds, simple point-and-click shopping? Who wants to give those up? In my screenplay of a sinister future that is, well, today, the big tech companies are already under scrutiny for their business practices, and accountability for their effect on people and society is becoming a larger discussion, propagated by the BLADE RUNNERS. But even as more and more people become disillusioned with them as businesses, the people cannot escape their clutch as useful services. At least, for three of the four. Google, Facebook, and Amazon will play a bigger and bigger role in people’s lives as people continue to depend on their services.

Apple, on the other hand, does not escape the PR nightmare so easily. Apple, being a tech giant that thrives on innovation, depends on its hardware standing apart in order to remain relevant. Unfortunately, its groundbreaking inventions seem to be ushered further and further off to the side as it instead gets caught in the typical cycle of most other tech companies – making what they already have just slightly better, and competing directly with what others are already doing. Apple is, in essence, replaceable if it does not continue to change hardware in ways that are meaningful and capture audience interest. Each product it releases is ephemeral, whereas Facebook, Google, and Amazon have created landscapes that serve longer-lasting, continuous purposes.

Without truly groundbreaking and enticing new innovations to its products, I don’t think Apple will be able to keep surviving its many slip-ups – most recently the revelation that it purposely slows down older iPhone models when a new phone comes out. Even the voice-assistant technology it popularized with the first major use of one – Siri – can’t keep up with the competition that followed in its footsteps.

GAFA is not just one single-minded tech giant that threatens to control humanity. GAFA is itself a collection of companies in competition with one another, and GAF is poised to flush out the final A by dominating the technology fields Apple tries to compete in. Since Apple doesn’t really have much of a unique asset to rely on anymore, I don’t think it will survive the blows.

Apple dies; Google, Amazon, and Facebook continue to thrive on the mutually independent services people rely on; the Blade Runners continue to fight an uphill battle.

Cut to black.

Cue Synthesizers.

Roll Credits.

Never-Ever-Better-Wasers

In the New Yorker article “The Information: How the Internet Gets Inside Us,” Gopnik discusses how people fall into three classes when it comes to new technologies: the Never-Betters, the Better-Nevers, and the Ever-Wasers – meaning ‘fully embracing new technology,’ ‘actively against new technology,’ and ‘somewhere in between, recognizing there will be good and bad with each new technology, and that divided reactions to new technologies have always been going on,’ in that order.

As with all sets of terms that attempt to divide humanity up into neat little boxes, I think these three are pretty reductive. Where people fall would be a spectrum, with the Never-Betters representing the polar positive, the Better-Nevers the polar negative, and the Ever-Wasers everything in between, according to these definitions. (I would also like to propose that the Ever-Wasers seem like a bit of a side step to the whole categorization, since their definition includes recognizing that new technologies have always produced split reactions. That recognition should instead be another layer, a yes or no, applied to all three labels along the entire spectrum.)

There are always going to be portions of society that love the way things are and are resistant to change and developing technologies, and likewise there will always be those who welcome and invite innovation and newness in technology. But, for the most part, I think society has come to see how often technology gets revolutionized, and has come to expect thorough redesigns of how things work as a consequence.

Every time Apple or Android releases a radical OS update that completely changes the look, feel, maybe even layout and functionality of a phone, people gripe about it, because there is a stutter in convenience while they relearn how to use their phone. But then they get used to it, and reverting to the old OS would seem almost as alien as a new one. The speed at which operating systems are updated and replaced has created an anxiety of sorts, an almost-constant expectation that, soon, everything will change. The speed at which we (a “we” of developed cities in North America, not a global we) experience these constant updates and changes has, I think, numbed us to the idea of technological change, because we have lived through revolution after revolution of technology ourselves, not just researched it in history.

Therefore, I believe the Ever-Wasers, being the broadest option, capture who we are as a society better than the other two terms.