
Wednesday, March 27, 2013

Serendipity

Serendipity.  According to Wikipedia, the word was voted one of the most difficult words to translate, but it means a pleasant surprise or a happy accident.  So, you might say that this course, Literacy in the New Media, and specifically all of the work culminating in this last assignment to update and re-focus my blog, was serendipitous.  It was a pleasant surprise to see my blog transform from one focused on school work to one focused on my life's work.  A natural evolution, prodded along by the work that I've done over these last several months.  And it was also a happy accident that just as I finished updating the look and feel of my blog, I noticed that I had just surpassed my 1000th page view.  Now, I know that a goodly number of those page views were my own, but I'll take the landmark as serendipitous just the same.  A signpost on the path that my education has taken, signifying a fork in the road.

Image courtesy of Stuart Miles, freedigitalphotos.net

Thursday, March 21, 2013

Balance


The countdown has begun.  Well, a number of countdowns, really.  All happening simultaneously.  The first is the countdown to the end of this course that I have been engaged with, Literacy in the New Media.  The second is the countdown to graduation.  Having successfully passed this course, I will have satisfied the requirements for graduating with a degree in adult education.  And finally, the countdown is on toward the end of this blog as I (and others who have read it along the way with more than a passing interest) have come to know it.  The blog started out as an experiment in critical reflection, then morphed into a tool for ongoing reflection upon my learning and a place to post some of my work and other extraneous thoughts, and has finally played out as an exercise in produsage, a “merging of … producer and consumer in an interactive environment” (Bird, 2011, p. 502).

In the remaining weeks of this final course, we have been tasked to reflect upon our experiences within the course, utilizing all that the World Wide Web has to offer in terms of the opportunity to contribute to the digitized cultural landscape, and to consider whether or not we are encouraged to continue our produsage.  We have been asked what ‘intimations of deprival’ we have: what feelings, if any, that something will be missing in society as we move toward an increase in produsage.  Put another way, as interactivity replaces passivity, in the words of Sterne (2012), what will be the cost, and what will be lost?

I have to admit, I was born and raised in the television era.  Passive consumption of media was king.  I grew up with 2 channels that changed to 5 when I came of age and we got cable (well, 6 channels if you considered public TV a channel back then…we certainly didn’t).  That consumption ramped up when more channels were added via a VCR that had the digital capacity to reach numbers beyond the 13 on our TV’s dial.  I still recall the dark days between getting cable and getting a VCR, when the dial on our 19” Magnasonic became stripped after repeated attempts to change from channel 6 to 10 with a lightning twist of the wrist.  Many months were spent trying to tune the TV with a pair of vice grips.  Those of you over the age of 40 will be able to conjure up the image, I’m sure.  But, I digress.  The point is that my consumption was entirely passive.

In the words of Lunenfeld (2007), “television doesn’t improve so much as metastasize, spreading out from the den, to multiple incarnations in every member of the family’s bedrooms, into our cars, onto our PDAs, and into ultra-bright “outdoor models,” recently reserved for the ultrarich but soon to be in every backyard space near you” (p. 8).  Well, we didn’t have a television in every room in our house growing up.  And I don’t have that many TVs now, much less one in my yard, these many years later.  But his point is well taken.  The television acts as an entrancer, a penetrator of consciousness, lulling you into a passive consumer state, drinking it all in with little effort.  Contrast this to the age of interactive computer technology: “Contemporary media beg for and sometimes demand active participation. They ask their users to intertwine them with as many parts of their lives as possible. It is not just so-called social media (a misnomer if there ever was one—since all media are by definition social). Magazines and newspapers implore us to write back and explore on multiple platforms. TV shows ask us to go online and participate in discussions and games, books get their own Facebook pages where readers are asked to “like” them, software companies put together “street teams” of users willing to promote them in a manner analogous to what concert promoters used to do” (Sterne, 2012).

So am I more willing and eager to go forth and engage as a produser in this brave new world of interactive media?  And do I have any concerns about what we are losing as we move from passive to interactive engagement?  The answers are no and, relatedly, no.  I work at interactive produsage all day long already.  It is integrated in how I make my living and how I continue to grow my career to ensure that I can continue to make a living in the future.  I have to interact with the digital world because it is both a source of information and a place to share and grow.  I am committed to lifelong learning.  Long ago, I learned that buying into that concept was really the only way to ensure some sort of stability in my life.  The World Wide Web is the place where much of that lifelong learning can take place.  So, I am not willing and eager to go forth and interact even more than I do now.  The truth is that I like to take a break from that interaction from time to time.  I think that most people do.  They are interested in simply sitting back and just consuming.  And I don’t think that there will ever be a time when produsage supersedes consumption; the balance will always tip toward consuming.

Lunenfeld (2007) speaks about consumption and produsage in terms of uploading and downloading.  Consumption is downloading; interacting with and contributing to cultural content is uploading.  He speaks about the symbiotic relationship between the two: “to claim that downloading is inherently harmful and uploading innately positive would be nonsense. The two syndromes are complementary, but to function in an evolved mode, they should be balanced. The watchwords are to be mindful in the consumption of culture, or downloading, and meaningful in the production of it, or uploading” (p. 11).  I couldn’t agree more.  In the days ahead, my blog will undoubtedly change as I try to strike a new balance in my own produsage in the face of completing this course and my degree.
References

Bird, S. E. (2011). Are we all produsers now? Cultural Studies, 25(4-5), 502-516.
Lunenfeld, P. (2007). History as Remix: How the Computer Became a Culture Machine. Rue Descartes, no. 55: Philosophies entoilées. Online [PDF].
Sterne, J. (2012). What if Interactivity is the New Passivity? FlowTV, 15(10). Online.

"Stacking Stones" image by Michelle Meiklejohn, freedigitalphotos.net

Monday, March 18, 2013

Participating in the narrative

This week we were asked to consider the opportunities that social media provides for citizen journalism or social activism. Specifically, we were asked if these new opportunities encouraged our participation; are we more apt to participate in social action or at least contribute to stories that are topical, if not important, to the development of our society? My answer to this question is, “I don’t think so”. The technology has certainly afforded me the opportunity to join in and even initiate on occasion. But, while the technology is convenient and offers me many opportunities to engage in activism from the comfort of my flannels and favourite T, it doesn’t get me off the recliner. The very fact that these new opportunities exist hasn’t encouraged my participation. Rather, circumstances have encouraged me to participate, to be socially active, to contribute to the narrative.

Bruns and Highfield (2012) suggest that citizen journalism is, more often than not, news curation or a continual (re)framing of a story. “Commentary responds to existing, already published news and opinion; it collects, collates and combines these existing materials, contextualizes them and thereby points out new frames for their interpretation and analysis” (p. 6). My attempt at publishing a ‘story’ using Storify is a great example of this citizen journalism that Bruns and Highfield talk about. And, I suppose what I have been doing with my blog these past many months, responding to articles that I have read and reflected upon, is another example of citizen journalism, albeit less topical and, for most of those outside (and some inside) the adult education realm, less interesting.

Bruns and Highfield present the argument that this type of journalism is ambient. Billions of producer-users are constantly watching as stories bust out of the cyber-gateway and develop, circulating among other producer-users who contribute commentary and links to other stories that provide verification of content or just another angle to view it from. But, if this type of participation in the development of the story is ambient, does that mean that it is a watered-down version of true activism? Does the technology afford people the opportunity to participate but at the same time dumb down our participation? And is the result an insipid, bland version of a truly moving story? I tend to think that it is the exact opposite. While there is something to be said about a great investigative journalist, a truly gifted yarn-spinner, much of the journalism that I grew up reading was anything but. Rather, most of it was a tired attempt to get the facts down on paper to meet a deadline and fill a quota of words or column inches.

Hermida (2012) suggests that journalists claim an ability to “interpret and represent reality. The practice of verification bestows journalistic communication with its credibility and believability” (p. 661). It’s this verification that some claim is missing from citizen journalism. However, verification is still there; it has simply become a collaborative and fluid process. As the story develops, from the moment that it is simply a tweeted experience to the moment that it is a feature-length documentary, the story is unfolding and being verified along the way, from the unvarnished truth to the polished truth. Verification, as Hermida suggests, has reverted to a burden on the audience. Bruns and Highfield call this multiperspectivality.

I recall my time as a managing editor of my college newspaper. My job at the paper was a pretty sweet gig. In addition to having access to an Apple Macintosh to complete my assignments for school (I hated booking time at the computer lab), I only really “worked” one night each week, balancing the books and managing a group of editors/writers (and eating pizza). I also had to write the occasional editorial. At that time, the mid-1990s, newspapers were being bought up by huge multinational corporations. It was feared that media moguls such as Rupert Murdoch would own the news and, with that ownership, they would have the power to dictate the stories that were told and how they were told. Call it uniperspectivality. Feeling inspired, I wrote a tongue-in-cheek editorial calming the fears of the student body; the student newspaper would remain student-owned and student-run. Their voice would still be heard.

One of the reasons for taking you, the reader, on this little trip down my personal Memory Lane is to make the point that the World Wide Web is really a place where millions of little student newspapers can exist alongside the multinational conglomerates. And the stories that they tell are no less compelling and often richer for the diversity. The other reason for telling you this story is to bring me back to my assertion in the opening paragraph of this posting: circumstances encourage my participation rather than the technology and the opportunity that it affords to contribute to the narrative. Back in college, my participation in the narrative was circumstantial. If I had the time and felt compelled to write an editorial, I did. Now, the same circumstances draw me to participate.

References:

Bruns, A. & Highfield, T. (2012). Blogs, Twitter, and breaking news: The produsage of citizen journalism. Pre-publication draft on personal site [Snurb.info]. Published in: Lind, R. A. (Ed.) (2012). Produsing Theory in a Digital World: The Intersection of Audiences and Production. New York: Peter Lang, pp. 15-32.
Hermida, A. (2012). Tweets and Truth: Journalism as a discipline of collaborative verification. Journalism Practice, 6(5-6), 659-668.

photo courtesy of Stuart Miles freedigitalphotos.net

Thursday, March 7, 2013

Check out my story

Just published an article on Storify. Take a peek and let me know what you think.

Sunday, February 10, 2013

Making music and making money

Here is my first attempt at recording a podcast using Soundcloud, an online "social sound platform where anyone can create sounds and share them everywhere".  As I state in my podcast, I am providing a response to Ian Condry's 2004 article in the International Journal of Cultural Studies, titled Cultures of Music Piracy: An Ethnographic Comparison of the US and Japan.  Specifically, I respond to his explanation of the recorded music industry's concerns regarding the illegal downloading of music and how it will spell the end of music making.  Have a listen and post a comment if you like.  As I often do in my blog, I will share with you some of my learning in exploring this brave new world of new media. After much fiddling about with microphones and audio levels, I discovered that trying to record audio in close proximity to a cordless phone plays havoc with the quality.


 
Reference:
 
Condry, I. (2004). Cultures of Music Piracy: An Ethnographic Comparison of the US and Japan. International Journal of Cultural Studies, 7(3), 343-363.

Image by seaskylab, freedigitalimages.net  
 
 

Saturday, February 2, 2013

My attempt at posting to YouTube

After much frustration and way too much screen time, I have posted a video to YouTube.  I created it using Google's Picasa.  It didn't turn out exactly as I had hoped it would.  There are some issues with the timing of the captioning.  For some reason, when I uploaded it to YouTube, the captions were delayed to the last couple of seconds of each slide.  And, I had originally hoped to do a voiceover rather than using captions.  Unfortunately, I've discovered that my ThinkPad doesn't have a good mic, and recording audio to a video via Picasa or YouTube isn't a simple endeavour.  Consequently, I had to pare my script down to bare bones.  I did learn quite a bit about Creative Commons and hope to expand on this in a later post. This video is kind of a prelude to such a discussion; is the digital creative commons ushering in a 21st-century evolution of Paulo Freire's philosophy of education as the practice of freedom?  Anyway, check out the video and if you want to make a comment, feel free, particularly if you have any ideas about fixing the captioning.

Friday, January 25, 2013

Belly up

In my last post, I stated that my contributions to online content are inhibited by self-doubt.  I am often plagued with the question, “Does what I have to say and how I say it have any appeal and traction with anyone out there on the World Wide Web?”  Some might say that if someone can post videos of their cat falling into their toilet and get a ton of 'likes', then certainly what I have to say on issues pertinent to me should have some truck with the billion-or-so other consumers of online content.  So just get over yourself.  Point taken.

One concern that doesn’t dog me, however, is the origins of my ideas and who might take them and build upon them.  I borrow frequently from items that I’ve read, seen or heard on the Internet and elsewhere.  Working toward my degree in adult education for the past decade and a bit, I know that knowledge is constructed from many parts and pieces rather than magically created.  And, as a perpetual student, I’m no stranger to citing work, giving credit where credit is due.  As such, if someone can borrow my ideas to further their own learning, well then, provided that they give acknowledgment, that’s just fine by me.  This is all just a part of the learning process.  We beg and borrow (and sometimes steal), and others do the same.  In the end we all benefit, no?  Well, not if you’re in the business of creating knowledge and culture, dependent on your ideas for making ends meet.

If someone takes your ideas, creates something similar (even if it is better), and collects tolls on your toil, this is a real problem.  Not only are you out the profit from your efforts, but you are less likely to come up with new ideas of your own in the future.  What’s the point if you’re just going to lose out to someone who has the gift of grab?  Therein lies the rub.  How do you create an atmosphere where ideas can be created and shared while protecting the rights of innovators to make a buck from their creations?  The answer, for the past few hundred years, has been copyright laws and the concept of intellectual property.  These allowed innovators to stake their claim to their ideas, brand them as you would livestock, something that told everyone, “Hey, this belongs to me.  Move it along.  Eyes on your own paper.”  But, in the age of the Internet, the idea-soup is getting progressively murkier, more stew-like than ever before, calling into question laws and concepts that have held true for so long but now appear so last-millennium.  This leads me to the question that we were tasked with answering this week:  How can online communities of "producer-consumers" literate in new media work toward building a robust and freely accessible cultural commons in the face of restrictive copyright laws? Or, in keeping with my gastronomical metaphor, with the info age upon us, where ideas, culture and information are accessible to just about anyone, anywhere, how do you create a common pot where creativity and innovation can simmer and brew without burning its contributors?

Henry Jenkins (2004), in the International Journal of Cultural Studies, describes the conundrum like this: “Thanks to the proliferation of channels and the portability of new computing and telecommunications technologies, we are entering an era where media will be everywhere and we will use all kinds of media in relation to each other… Fueling this technological convergence is a shift in patterns of media ownership. Whereas old Hollywood focused on cinema, the new media conglomerates have controlling interests across the entire entertainment industry” (p. 34).  So, while technology is providing an avenue for cultural convergence across all media, large multimedia companies are rapidly invading the cultural commons, controlling it with restrictive laws and a phalanx of lawyers.

The cost of this control not only affects the little guy or gal trying to produce and publish his or her own ideas.  It also has global implications.  As Toby Miller (2004) states: “Whereas culture has frequently permitted the South a certain political and social differentiation, the ‘third world’ has not been allocated a substantive role under the new arrangements [of the new global economy] beyond providing a kind of anthropological avant-garde laboratory for music, medication and minerals.  The costs of compliance with the WTO Agreement on Trade-related Aspects of Intellectual Property Rights divert money away from basic needs and towards costly computer equipment and costly bureaucrats with the skills and resources to evaluate and police copyright, trademarks and patents” (p. 59).  The globalization of our economy, then, adds a multinational flair to our creative stew. 

In a recent blog post, my classmate, Ann, argues that the question that we were tasked with answering this week is incredibly complex and fluid, making it difficult to even wrap your head around, much less answer.  She contends that the question of how we work toward building an accessible cultural commons in the face of restrictive copyright laws would be better framed as: “How do we balance the right of producers to be appropriately compensated for their work (not in perpetuity, but to enable them to earn a living) against consumers who are increasingly entitled regarding accessing material on the World Wide Web”.  I agree that it is a question of balance.  And I would argue that this constant push and pull, this perpetual teeter-totter of re-negotiating boundaries, can sometimes set the stage for creativity, if for no other reason than to ‘stick it to the man’. “Innovation will occur on the fringes; consolidation in the mainstream” (Jenkins, 2004, p. 35).

But I am also thinking that there is still validity to the question: how do those of us who are producer/consumers of online content, through our production and consumption practices, ensure that there are still fringe contributors to the cultural stew (last food reference, I promise)?  How do we ensure that those folks aren’t pureed (okay, I lied) by the legions of litigators representing the multinational media conglomerates?  The answer to that question isn’t easily found, as the articles and blogs that I have referenced aptly attest.  However, I think that there are some simple tenets that we can follow to support the maintenance of a cultural commons.  Contribute without payment and encourage others to build on your ideas.  Acknowledge the sources of your ideas.  And lastly, pay for content when you can.

References:

Jenkins, H. (2004). The Cultural Logic of Media Convergence. International Journal of Cultural Studies, 7(1), 33-43.
Miller, T. (2004). A view from a fossil. International Journal of Cultural Studies, 7(1), 55-65.

Image courtesy of chawalitpix, freedigitalphotos.net
 
 

Thursday, January 17, 2013

Encouraged and inhibited


  
This past week my classmates and I were challenged to consider our engagement with online media, as consumers and as producers, outside the scope of our most recent online educational endeavour.  Initially, I thought that I was more of a consumer than a producer and, even as a consumer, I wasn’t consuming that much.  But the more I thought about it, the more I discovered that I am equal parts consumer and producer of online content.  And, as a producer of online content, I do so with much trepidation, concerned that my voice is too, well, old.  I’m just not that compelling.  At least not according to the “new” definition of the word, evident in most of what’s out there on the web. This fact, more than any other, both encourages and inhibits my contribution to online content.
 
Lucas Hilderbrand writes about YouTube, the website where millions of videos, some original and some excerpts or full reproductions of mass media, are posted for anyone to find and view.  YouTube is an example, he contends, of a shift in how we consume, and consequently produce, online content.  “Like memory (cultural or personal), YouTube is dynamic. It is an ever-changing clutter of stuff from the user’s past, some of which disappears and some of which remains overlooked, while new material is constantly being accrued and new associations or (literally, hypertext) links are being made” (Hilderbrand, 2007, p. 50).

Hilderbrand’s analogy made me wonder: what happens when our memories are stockpiled, not with snippets of narratives, but with clips wholly unto their own?  “In the culture of the clip, spectacles, stunts, cuteness, pop culture references, and exhibitionism all trump narrative” (Hilderbrand, 2007, p. 51).

Teresa Rizzo, in her article in Scan, an online journal of media arts culture, likens YouTube to Tom Gunning's concept of the “cinema of attractions...based on spectacle, shock and sensation”.  Gunning's concept was developed in reference to early film produced at the beginning of the last century.  Rizzo contends that YouTube and sites like it are similar in that they are designed to shock rather than tell a story.

So, what happens when we take away the context, the narrative?  What are we left with? I’m not entirely sure because, as I alluded to earlier, I am more of a consumer of narrative than clip, but I see that balance tipping the other way, generationally.  And, if the narrative does give way to the clip, I suspect that we will lose something in that process.  That loss will be the art and wonder of the story.  That undercurrent behind the visual or auditory spectacle that sticks with you and keeps you thinking long after the image and sound fade.

So this leads me back to my earlier assertion that I am at once inhibited and encouraged to contribute to online content.  I am concerned that what I produce, largely narrative, won’t capture the imagination of the “clip” generation.  But I am also compelled to contribute in a narrative way to maintain the art of the story. 

References:
Hilderbrand, L. (2007). YouTube: Where Cultural Memory and Copyright Converge. Film Quarterly, 61(1), 48-57.
Image courtesy of Maggie Smith, freedigitalphotos.net

Sunday, January 6, 2013

Unfettered?

I’d like to expand on my prior post, on this notion of being fettered. The freedom promised by communication technology is a farce, an illusion. We live in an age where our technology affords us a level of mobility like nothing we have experienced in the past. Temporal and physical spaces no longer define our lives; we can live in one location, work in another, and establish, develop and maintain social ties across vast distances. But do all of these affordances mean that we are any freer now than we were in the past? Have we not simply exchanged one set of chains for another? Have we not traded our attachment to places for attachment to devices? I would argue that we have. 

According to Campbell and Park (2008), “we have entered a new personal age of communication technologies. That is, the communication technologies predominant in today’s society, particularly mobile telephony, are characteristically personal in nature” (p. 372). Now, this is not to say that our communications are more personal, but rather our interaction with technology is decidedly more personalized. It’s all about us, or, more precisely, me. The ‘me’ being whoever ‘you’ are, so long as you are connected. Confused yet? Let me explain.

Nowhere is the personal age of communication technology better exemplified than in public spaces. Walk into any park or mall and what will you see? Many people engaged with their devices, checking emails, tweets, posts and the like, swiping, tapping and typing. I have noticed this at work, particularly in the minutes before a meeting is about to start. A group of people sitting around a boardroom, pleasantries taken care of, madly working their devices, responding to and sending messages across the ether. “Mobile communication around copresent others not only personalizes public space, it also personalizes the communal experience of being in that space” (Campbell and Park, 2008, p. 379). We are no longer participating in the construction of a common space between copresent others. We are not finding common ground and, in that common ground, the freedom of knowing that we are not alone. When we engage with our smartphones as an intermediary for communicating with the outside world, we have more than just checked out. We’ve checked into our own personal world in a way that excludes all else.

Campbell and Park (2008) explore this personalization further: “Previous to the adoption of the mobile phone, individuals would have more bounded interaction with friends. They would perhaps save bits of information in anticipation of their next meeting and then use that time to update each other. The mobile telephone means that there is no longer the need to deal with this backlog of information. The members of a social group are frequently updated as to the issues and events taking place among their peers” (p. 380). So, we’re no longer wasting time hearing about what has happened to each other in between the times that we actually speak to each other, because we already know via the multitude of tweets, posts and so on… I guess that’s more efficient. But does this free up our time so that we can discuss more pressing issues when we actually do get together? In her blog, Katie Benedict reflects upon her personal relationship with her mobile phone: “I am always checking my text message, facebook posts, tweets and emails. If I don’t see my red light blinking on my phone I feel, “what’s going on?” I sort of feel lonely.” So, as I have witnessed around the boardroom table, time spent in social situations with others is more often spent attempting to churn up and consume even more information, reading up on and providing more updates via our devices. And I suspect that the time spent waiting on the blinking red light is further time taken from engaging in the here and now.

Walker et al. (2009) studied the iPhone as an example of an emergent product design strategy that engages users, knowingly or not, in the personalization of their device and the further development of the technology. “Far from being rigid, fixed, bureaucratic and very ‘technology-like’, the iPhone is instead open, flexible, adaptive, with a lot of underlying technology largely hidden from view” (p. 206). Products like the iPhone are designed to set the stage for personalization and, consequently, further development. They are built not with an end product in mind, but rather a set of conditions that will allow for further development. While this may be true, Goggin (2011) suggests that “the playground of apps [the software that allows the user to personalize the device] remains tightly controlled by particular corporations—such as Apple, Google, Samsung, Nokia, and others—and the rules of the apps stores that each has created” (p. 150). In his blog, Mike Mitchell asserts that, “apps allow us to track nearly everything we do and with more ease than ever before. They allow us to do the things we want faster, easier, and more inclusively. Like any other market, the market for apps is subject to the laws of supply and demand. It searches for profit first, community interest second.” I couldn’t agree more. It’s supply and demand and profit and loss that ultimately dictate what apps are available. So, while we may feel a certain freedom by personalizing our devices with our ‘own’ apps, it’s the companies that supply those apps who have the ultimate control over how we interact with our devices and consequently how we employ them to communicate and interact with our world.

In my previous post, I talked about getting out from behind the LCD and back to the social side of human services. While I was chained to my desk, I noticed something about myself. Social skills are like any other skill: use it or lose it. I was becoming more socially inept as the days passed sitting in my cubicle, interacting almost exclusively with hardware and software. This was one of the motivating factors that led me to my present vocational direction: the need to trade interaction with electrons for interaction with neurons, hardware and software for wetware.

But what I found when I accepted my smartphone was a world of ubiquitous communication, facilitated and mandated by mobile electronic devices. This wasn’t my first experience with the communication technology of the information age, but it kind of closed the loop for me. Work was the last bastion, ironically, where communication had its limits: the confines of my cubicle and my workday. Such is the world that we live in, though. Through mobile communications, we have exchanged wires for microwaves and, in doing so, we have also traded the option of turning off and tuning out for the promise of anytime and anywhere.

Ann blogs about her experience as a baby boomer teacher of Gen Y students in this personal age of communication technology. “How do I work with this phenomenon – do I integrate it – ignore it – ban it – give up? I text – some; I have not (as yet) linked my workplace email to my phone – I do not want emails to reach me at all hours at which they are sent; do I take my iPhone to the bathroom – no – but I have thought about it! I can live without mobile phone access 24/7. Can (and should) Gen Y – who are my student base?” Those are salient questions that illustrate a struggle and frustration similar to my own, I think. To conclude this post, I’ll attempt to sum up my reflections on my relationship with ubiquitous communication technologies.

I, like many others of the human race, have a compulsion to communicate. I really do. Just not with you. And, not with all of my friends and family and coworkers, present and past. And, certainly not with a multitude of people that I don’t know. Not all the time anyway. And truth be told, probably not often enough. So, this is the dichotomy that I struggle with and it is exacerbated by the ubiquitous nature of communication technologies today. To the point, I marvel at the apparent freedom to communicate anytime anywhere, but I abhor the compulsion to communicate all of the time, wherever I happen to be. I am at once free and shackled by ubiquitous communication technology.   

References

Campbell, S. W. and Park, Y. J. (2008). Social Implications of Mobile Telephony: The Rise of Personal Communication Society. Sociology Compass, 2, 371–387.
Goggin, G. (2011). Ubiquitous apps: politics of openness in global mobile cultures. Digital Creativity, 22(3), 148-159.
Walker, G. H., Stanton, N., Jenkins, D. and Salmon, P. (2009). From telephones to iPhones: Applying systems thinking to networked, interoperable products. Applied Ergonomics, 40(2), 206-215.

Image courtesy of wandee007, freedigitalphotos.net
 

Saturday, January 5, 2013

Shackled and chained


For those of you who have read my prior posts and may have been following some of my musings on my vocational meanderings, you will know that I work within the social services.  For much of the recent past, say the last few years, I have been working behind the scenes, involved in the analysis and design of social service delivery models, processes and systems.  Such work, although collaborative at times, saw me spending much of my time researching and writing, eyes glued to the screen, chained to my desk, as it were.    Most recently however, I had the opportunity to return to the more ‘social-side’ of the human services field.  I’m not working in the trenches, the front-line, though.  Rather, I am working in management.  A little removed from carrying and managing a caseload, but challenging in its own right and certainly more social than the backend work that I had been doing for some time. 
 
I was excited to begin my new job.  I had worked in management a number of years ago, was a little disillusioned when I left, and was eager to give it a go again, armed with greater experience and knowledge and generally more maturity.  Also, I was eager to get out from behind my LCD, start interacting with people rather than machines, speaking language with nuance and feeling rather than statistics and outcomes.  When I walked into my new office and sat in my new chair, I remember thinking, “This chair isn’t exactly comfortable, but that’s okay because I’m not going to be sitting in it too often”.  And, indeed I haven’t.  I’ve busted out of those chains that bound me to desk and screen.  But before you presume that this story continues with a soliloquy about how I emerged from the cubicle dungeon to take on the role of transformational leader, engaging my colleagues in participatory organizational change, think again.  That’s a story (perhaps) for another day.  Rather, the point that I want to make in this post is that I may have broken free of my desk, but I’ve traded the handcuffs of my 17-inch LCD for the tiny shackles that firmly hold my thumbs to the keyboard below an equally tiny 3-inch LCD.
The smartphone that I was issued shortly after starting my new job found me sending and answering emails at all hours, in all sorts of places.  Any gap in time during the day was no longer filled with small talk with others but rather with furious typing on those little devices.  I was interacting with others, but not really.  I was interacting with my smartphone while communicating with others in staccato bursts, taking full advantage of its ubiquitous nature.

Ubiquity is the concept of anytime, anywhere.  Mobile technology and its convergence with the Internet, embodied in the smartphone, give rise to the notion of ubiquitous information and communication:  the ability to access any information, at a moment’s notice, from anywhere in the world and communicate that information just as immediately.  Goggin (2011) asserts that this is a farfetched and far-off reality (p. 149).  We really don’t have access to all information all of the time and we are really not able to communicate instantaneously.  But is it really that farfetched and far-off?  Campbell and Park (2008) speak about the emergence of biotechnology and the “growth of sentient objects, that is, information and communication technologies embedded in the surrounding environment” (p. 383).  Could there come a time when our personal biotechnology interacts with sentient objects in our environment to provide our bodies and minds with information even before we think to ask for it or pause to wonder?  Could there come a time when we send an email to someone simply by thinking about it?
So I’ve traded the chains of my desk for the shackles of my smartphone.  Will there come a time when I trade those tiny shackles for something else entirely?  What will those bindings look like and, more importantly, how fettered will I be?
 
References:
 
Campbell, S. W. and Park, Y. J. (2008), Social Implications of Mobile Telephony: The Rise of Personal Communication Society. Sociology Compass, 2: 371–387.
Goggin, G. (2011). Ubiquitous apps: politics of openness in global mobile cultures. Digital Creativity, 22(3), 148-159.

Image courtesy of Pong, FreeDigitalPhotos.net

Saturday, November 24, 2012

Frustrating and messy...


What started out as eagerness to engage in a collaborative effort to construct knowledge soon turned into a progressively frustrating exercise these past few weeks. Our class was tasked with creating a wiki, a collaborative article on a subject in popular culture.  The subjects were “stubs” (articles requiring further information) in Wikipedia, the online collaborative encyclopedic repository of information and knowledge.  We could work on our article independently or in groups, but we would all be required to contribute to articles outside our own authorship.  At the end, we would post our articles in Wikipedia.

Well, I’m not a pop-culture connoisseur.  I’m not really a “buff” of any kind.  Music, cinema, literature, television… they’re all just passing fancies of mine.  I like the distraction and I’ll even delve a bit deeper into something that is more interesting to me, but not at any level that you could consider fanatical and usually not into anything that could be considered popular at the time.  I’m just not really in-tune with the current goings-on.  I guess I could blame that on my digital video recorder.  Pop culture is the life-blood of so much that graces the small screen between the few television shows that I record and watch, and I simply fast-forward right past it.  But pop culture goes beyond that.  It is pervasive in all forms of media.  The fact is that I’m just not that engaged.  So the blame really rests on my inner [grumpy] "old-man” who is increasingly making himself known to all around me with phrases like, “Is that what the kids are into these days?” or, more directly [grumpily], “What the hell is that?”  So my lack of interest in the subject matter was the initial reason for my frustration.  Selecting a subject to research and write about in a genre where I have little interest or knowledge was a bit daunting. 
Finally, after much searching and deliberating, my partner and I selected Jake Gold as the subject of our wiki.  Gold is most commonly known (at least to me) for his work as a judge on Canadian Idol.  But through some less-than-scholarly research, I discovered that he is quite an accomplished and well-respected manager in the Canadian music biz.  In fact, he managed the early career of a band that I had more than a passing interest in during my post-dropping-out-of-university, pre-finding-some-direction-for-my-life years.  So now I had some connection to the subject matter, something to get the mental gears grinding.  But as I said, that was just the beginning of my frustration.

We next set to the task of drafting our article.  But this isn’t as simple as crafting a document in a word processor or even through a web-based interface such as a blog.  Wikis have their own language, their own rules for presenting and organizing information.  And the wiki-to-English dictionary available out there on the tangled World Wide Web isn’t that clear either.  I suspect that it was crafted by a bunch of people in a wiki as well (more insight into my rationale can be found in the next paragraph).  But after much back and forth with cheat sheets and less-than-helpful help articles and videos, and after many hours of squinting at symbols and letters in 8-point Courier font, we finally produced an article that looked and read like something you might find on Wikipedia.  And it had some information that may have been of some use to somebody somewhere.  That was until some of our classmates provided their contribution.  Enter the next phase of frustration and much Lewis Black-esque ranting and raving on my part.

As others contributed to our article, it became less and less our own.  We had lost control of the content and the format.  One misplaced backslash by a contributor and I was thrust into many hours of hair-pulling punctuated by exasperated expletives.  What’s more, after reading content that I had re-formatted, I found myself saying, “Is that right?”  And after re-researching I found myself saying, “No, that isn’t right at all!” It was at this point that I arrived at a revelation:  If I had struggled with the format and content for our article, and others had struggled in their contributions, what were my contributions to others’ articles like?  What were we really creating here? Knowledge?  Not likely.
Manuel Castells, in his 2005 paper, The Network Society: From Knowledge to Policy, takes issue with the terms ‘information’ or ‘knowledge’ society to describe society today, because knowledge and information have always played a critical role in our society, no matter whether we were progressing from mere survival to agricultural sustenance or from rural living to industrial life in cities.  Rather, he suggests we now live in a network society, broader in reach and potential than at any other time in our history, aided by communication technology.  And he asserts that we are now at a crossroads where “unfettered communication and self-organization” are “challenging formal politics” and creating a dichotomy: we want to “praise the benefits” of a networked society, but we fear losing control (p. 20).  Sound familiar? Well, it did to me.

I felt the sting of that double-edged sword myself: keen to engage in a collaborative effort with a network of people to construct some knowledge, but frustrated by the lack of control that I had over the final product.  My classmate, Ann, provided her assessment of Wikipedia this week in her blog:  “Wikipedia is maintained by thousands and thousands of volunteer authors and editors and we can now number ourselves among them.  In essence, Wikipedia is an information repository by the people, for the people.”  Wikipedia, she contends, is an exercise in democracy in this Information Society.  I’d have to agree with her contention.  But I’d also have to add something that many a politician and political pundit has said: democracy is messy.

Thursday, November 8, 2012

Bigger, better, faster...the economy of the info society

In her 2010 research paper, The Life and Times of the Information Society, Robin Mansell states:

“We might expect an interdisciplinary body of intellectual inquiry to have emerged during the past 50 years or so since scholarly work started to focus on issues around information and communication control systems...  However, …it is mainly, though not exclusively, insights arising within the discipline of economics that seem to influence policy makers, albeit indirectly, in this area.  This has major consequences because it means that many of the important social dynamics of societal change are persistently downplayed.   This process of exclusion of certain issues from the agenda of policy makers is aided by the continuing dominance of what is called here the ‘Information Society vision’”  (p.166).

It appears then, according to Mansell, that the study of an information society is largely focused on the economics of such a society: how the production and consumption of information supports the production and consumption of material wealth. And the consequence of this focus is an ignorance of the social dimension: how does the proliferation and accessibility of information contribute to or detract from a more livable society?

But is this predilection toward the study of the economics of the information society really just a characteristic of scholarly research and inquiry?  Or is this a characteristic of today's society in general?  I mean, let’s take the idea of the information age out of the equation here.  Generally speaking, are we not living in a materialistic society, one where economy is of chief importance in all things?

Image courtesy of Stuart Miles, freedigitalphotos.net

Thursday, October 25, 2012

The Information Management Age

In a talk for Festival del Diritto (Festival of Law) in 2008, David Lyon, research chair at Queen's University and director of the Surveillance Studies Centre, stated, “The emergence of today’s surveillance society demands that we shift from self-protection of privacy to the accountability of data-handlers.” Hmm. Is that realistic? I mean, I’m all for having data-handlers accountable for the information that they collect, for whatever reason. I wish that data-handlers would feel the same responsibility for my personal information as I do. I wish, like me, that they would have a moment’s pause every time they click “save” or “post” or “publish”. I also wish that they would spend a proportionately equal amount of time and money on securing the information that I and many others have entrusted to them, knowingly or otherwise. But, how does the saying go? “If wishes were horses, then beggars would ride”.

Bottom line: it’s all well and good to hope that data-handlers will protect our privacy, but the mountains of data held by the ever-growing hordes of data-handlers make the prospect of holding all of them accountable for protecting our privacy as much of a pipe-dream as holding the proverbial butterfly accountable for creating the hurricane. So, if holding the data-handlers accountable is a wouldn’t-that-be-nice solution, then we’re left with the idea of self-protection.

The reality is that we are living in an age where we are required to manage our personal information more than ever before. A slip of the tongue is forgotten with time and can even be denied later on. A slip of the keystroke, however, is forever burned on some hard drive somewhere, easily retrieved and brought into the light of day as evidence of not only who you were, but who you are now and who you hope to be in the future.

Now, I consider myself to be a cautious user of the World Wide Web, careful with what I put out there for fear of what might stick and come back to bite me in the ass. Don’t get me wrong. I’m not the paranoid type, but I am a rather private person outside the virtual world, so it only makes sense that I would be that way inside cyberspace as well. And, truth be told, I’m lazy and tend to lean toward the simple. I find protecting my personal privacy a tiresome endeavour most of the time anyway, so I really don’t go out of my way to make things more complicated by adding even more information into the cyber-cesspool.

But that’s me. When I read a blog post by one of my classmates this week, I was taken aback. In that post, Marnie writes: “The participants of Facebook are getting younger and younger every year. I was a counselor over the summer, and when returned home it was shocking how many of my campers that were the age of 6 had a Facebook profile. When you’re that age, you are not aware of the consequences of putting too much information on your profile.” No kidding. At that age, you don’t even know what a profile is, much less what it says about you. How can a six-year-old know about issues such as privacy and protecting your personal privacy? How can we credibly expect a six-year-old, or even a 16-year-old, to effectively manage their personal information? When I think back to when I was even 18 years old, I had difficulty managing the information contained in my wallet. I can’t tell you the number of times I sat pondering, now where did I last use my wallet…7-11? No, I stopped at McDonald’s after that, and then I went to the library…

But this is the information age that we live in now. Kids have to learn to manage more than what’s in their wallet. They have to manage more than the identity that they are still trying to develop through their interactions at school, first jobs, and other social situations. They have to manage all of that information that they enter into the electronic ether with a few taps on a keyboard or a click of a mouse. And they have to manage an identity that is developed in an e-society that catalogues all that they say or do for all to see, now and forever.

Jaysus.... I miss my wallet. Now, where did I put my cellphone?

Image courtesy of smarnad, freedigitalphotos.net

Wednesday, October 24, 2012

Little devices

While flipping through my favourite radio stations this morning, I happened upon this interview on Metro Morning.  Host Matt Galloway spoke with Isabel Pedersen, Canada Research Chair in Digital Life, Media and Culture at the University of Ontario Institute of Technology.  It was an interesting discussion about how our devices influence and even change our identity.  The interview was quite timely, coming after I had read a blog post by a classmate of mine, Ann.  In response to Sherry Turkle's statement, "The little devices in our pockets are so psychologically powerful that they don’t only change what we do, they change who we are", Ann asks:
As I post this blog entry and prepare to launch into the Twitterverse, does this change who I am?  And if so, is this something I want to meet, or to run from?

Friday, October 19, 2012

Rage against the machine

Sherry Turkle, psychologist, professor and scholar of the information age and its impact on society and the self, once wrote, "We come to see ourselves differently as we catch our image in the mirror of the machine". That was back in 1999. The machine was still in its infancy. Back then, she went on to write that our concept of ourselves, our identity, is being "recast in terms of multiple windows and parallel lives".

I think that we have always had multiple windows through which to view and present ourselves and, to a certain extent, we have always led parallel lives. The difference in the age of social media is that the multitude of windows in the machine that Turkle wrote about in 1999 has since grown exponentially; the number of people looking through them, nearly infinite. And the number of parallel lives that we can create online in 2012 is limited only by our ability to keep track of the accounts and passwords (and even then, there's an app for that).

Yes, the machine has grown significantly in the past 13 years. You would think that with such growth there would be many corners to hide in, many places to carry on our parallel lives without fear that they would ever intersect. This is not the reality, however. If anything, the virtual world has become more transparent.

Recent stories making the news (and trending on social media sites) bear out this new reality. Case in point: the miscreants who posted disparaging and thoughtless remarks on social media sites memorializing Amanda Todd, the teenage victim of cyberbullying. A group of people turned the capacity of social media to torment individuals on its head, forming virtual posses, trolling sites like Facebook and outing would-be anonymous posters.

In 2008, David Lyon wrote about our surveillance society. With the advent of social media, he rightly contends, the key purveyor of our personal information has shifted from government institutions to corporations. But the events of the past week lead me to believe that our surveillance society is shifting yet again. Oh, the machine is still chugging away, collecting and manipulating our personal information for government and corporations alike. But it appears that those who have been surveilled have begun manipulating the machine themselves. Rightly or wrongly.

Image courtesy of Victor Habbick, freedigitalphotos.net