Full-Stack Media Ecology

A McLuhan-syntonic Approach to Computer Literacy: Toppling the Pillars of Cyberspace

On June 29th, 2019, I delivered the above presentation to the Media Ecology Association at their 20th Annual Convention in Toronto, on the U of T campus, based on this paper. Learn more about the convention at mediaethics.ca. Attempts to move the paper toward a more finalized form have resulted in sprawling additions which will require much work; however, I hope the draft below suffices to entertain curiosity piqued by the video. 🙂 – Clinton, 08/02/19

This paper is undergoing a significant re-write, not least to address some typos and add more sources. Please consider it a draft in its present form. – Clinton, 02/21/19

Cyberspace is a fictional sensory environment with a traceable history. It is formally defined — much like the Euclidean space which Wyndham Lewis feared losing, and which Marshall McLuhan announced obsolete thirty years later. Its origins lie in the mid-1960s with the programming language Simula, which standardized the now-ubiquitous object-oriented approach toward computer programming (Rheingold, 238). Combined with the file-systems of magnetic data storage devices and graphical user interfaces, cyberspace has become synonymous with computing and media as a whole. Examinations of cyberspace emphasize the fantastic and unreal nature of the medium, but seldom puncture through to the realities of computing itself.
The applied methods of Marshall McLuhan promise great exploratory and explanatory power in today’s media environment if, and only if, the precise nature of digital technology as machines and as media can be acknowledged in a way commensurate to all various perceptions of them. Unlike analogue media, whose inner-workings are discrete, computing devices employ the universal Turing Machine concept which renders their operations evasive of straightforward explanation. Modern media literacy absolutely demands some basic form of full-stack computer literacy, without any permissible exceptions or objections. The alternative is control over the programmable, invisible environment being ceded to an arbitrary, self-selected few who are granted the power of technical determination over the many.
After demonstrating a model of simple, full-stack computer literacy, this paper will offer some exploratory probes into the nature of media and its effects in the style of Marshall McLuhan, with the aim of putting cyberspace in its proper place and revealing the hidden ground it has covered up.

1. Computer Content vs. Media Content

In the McLuhanesque sense, the content of a medium is highly visible or noticeable, to the extent that it distracts attention from the medium itself. A new medium constitutes an environment which is only subliminally processed by (and so, in turn, is subliminally processing) the beholder who is consciously fixated on the content. The content is easy to perceive because it presents itself as something old and familiar.

McLuhan posited that the electronic media of his time offered too much too quickly, thwarting conscious linear consideration. He claimed that human perception of them called upon in-depth, right-brain processing via mythic forms. We end up “feeling” electronic media and dynamically “making” our perception of it instead of detachedly considering it through a left-brain conceptual model which “matches” it. Since his time, analogue media have been largely swallowed up by digital computers, necessitating a fresh analysis of electronic media capable of encapsulating the nature of this new medium.

What is the content of a computer? One answer might be the hardware inside and its specifications. A more likely answer would be the files on the hard drive, which is to say the content of the storage device. Still another answer might involve the enumeration of the software environments providing useful applications to a user’s daily or professional life. By using Marshall McLuhan’s approach toward media studies, we can develop a nuanced, technically accurate approach to considering computers as a medium. First, we must re-cognize the development of computer interfaces themselves, from direct representation of hardware states up to the erection of simulations of cyberspace. This can be done by roughly tracing the historical development of the personal microcomputer, beginning in 1975.

2. Definition of computers and interface

At the lowest level, the content of a computer is the state of each flip-flop in the RAM hardware components, signifying either a 1 or a 0 in the abstract binary code. In computer science, RAM is invariably conceptualized as a memory-space of addressable regions of bytes. When a computer is running, the CPU defaults to moving sequentially through each addressable byte in memory, rather like a boardgame piece or the reader of a Choose Your Own Adventure novel. Occasionally the CPU will encounter a novel instruction at a memory address. Perhaps it hits an instruction to jump elsewhere in RAM to work on a subtask and return with an appropriate response, or to circle about in a loop, or to decide a heading on a branching path after a logical decision based on a mathematical operation. Sometimes it is told to access special addresses which correspond to hardware such as the keyboard, on-screen characters, mouse, or storage devices. This is the most straightforward, low-level way of conceptualizing the operation of computer hardware (disregarding microcode, the programming within a CPU, which is below the scope of this paper).
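To make this convoluted linearity concrete, here is a minimal sketch in Python rather than machine code: a pretend CPU steps through numbered memory addresses, takes one jump, and treats one special address as memory-mapped output. The instruction set, the addresses, and the “screen” location are all invented for illustration and correspond to no real hardware.

```python
# A toy fetch-execute loop (invented instruction set, not any real CPU).

SCREEN = 99  # invented address standing in for display hardware

memory = {
    0: ("LOAD", 2),        # put 2 in the accumulator
    1: ("ADD", 3),         # add 3 to it
    2: ("STORE", SCREEN),  # write the result to the "screen" address
    3: ("JUMP", 5),        # branch: skip ahead to address 5
    4: ("LOAD", 0),        # never reached
    5: ("HALT",),
}

def run(memory):
    pc, acc = 0, 0                        # program counter and accumulator
    while True:
        op, *args = memory[pc]            # fetch the instruction at the current address
        if op == "LOAD":
            acc = args[0]
        elif op == "ADD":
            acc += args[0]
        elif op == "STORE" and args[0] == SCREEN:
            print("screen shows:", acc)   # crude memory-mapped I/O
        elif op == "JUMP":
            pc = args[0]
            continue                      # jump: do not fall through to pc + 1
        elif op == "HALT":
            break
        pc += 1                           # default: step to the next address in sequence

run(memory)  # prints "screen shows: 5"
```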

This convoluted linearity is mirrored by the procedural programming style, in which the programmer writes the directions which the CPU will follow. The computer processor itself, in either a direct or an abstracted and simplified way, is conceived as the sole agent within the computer, performing instructions as-written, line by line. In the language of Alan Turing’s definition of a universal computing device, his Turing Machine, the RAM serves as the endless memory “tape” whose “cells” (memory addresses) hold “symbols” (bytes) read by the CPU in the role of “head.”
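The same vocabulary can be sketched directly. Below is a minimal, hypothetical Turing machine in Python, invented purely for illustration: a “tape” of “cells” holding “symbols”, read and written by a “head” that follows a small rule table (here, a machine that simply flips bits until it reaches a blank).

```python
# A minimal Turing machine sketch: tape, cells, symbols, and a head.

tape = {0: "1", 1: "0", 2: "1"}   # cells -> symbols; any unvisited cell is blank ("_")
head, state = 0, "flip"

# rule table: (state, symbol read) -> (symbol to write, head movement, next state)
rules = {
    ("flip", "1"): ("0", +1, "flip"),
    ("flip", "0"): ("1", +1, "flip"),
    ("flip", "_"): ("_", 0, "halt"),
}

while state != "halt":
    symbol = tape.get(head, "_")            # the head reads the symbol in the current cell
    write, move, state = rules[(state, symbol)]
    tape[head] = write                      # the head writes a symbol back
    head += move                            # and moves along the tape

print(tape)   # {0: '0', 1: '1', 2: '0', 3: '_'}
```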

Programming languages are classified from low to high, depending on their level of abstraction up and away from this basic Turing Machine functionality. Machine language and its corresponding assembly language are the lowest-level, meaning they directly represent the CPU’s actual operation. For the sake of this paper, they will be grouped with somewhat higher-level procedural programming languages. These afford the programmer many useful shortcuts, pre-defined functions and variables, and rudimentary use of natural language. Yet since procedural programs maintain the perspective of the CPU as the sole agent in the computer, instructed line by line through a program-flow of convoluted linear structure, we will consider all of these to be low-level.

3. Interfaces as medium

Input into large, early, institutional computers from the 40s through to the 70s had been done via physical rewiring, byte-by-byte toggle switches, paper telegraph tape, paper punch cards, and teletype or typewriter-style keyboards. Output was through blinking lights, paper print-outs, and various cathode-ray tubes such as oscilloscopes and television screens. The recapitulation of this procession at the microcomputer level was very quick. The toggle switches and blinking lights of the MITS Altair 8800 allowed hobbyists in 1975 to enter machine code into the RAM byte by byte and inspect the contents. Only two years later, in 1977, the Apple II, Commodore PET, and Radio Shack TRS-80 provided home and business users with affordable use of a keyboard and television interface to an interactive, procedural BASIC-language prompt.

Marshall McLuhan warned that the content of a medium, the object of focus, was always a distraction from the medium itself. Low-level software coheres tightly with the hardware of the machine in a complementary way, rendering the hardware naked for use. The computer is, in this case, a highly intricate mechanical machine, and its software is merely a highly mutable part of its mechanism, not “content” in the media sense of the term. Content must be whatever meaningful thing the keyboard and screen – the input and output devices – are communicating to the user via the interface.

In a media-studies sense, content is perception. Even though we might know something is in a computer, it is not “content” until it is striking our raw senses.

4. Code in execution and storage

Putting a program into RAM requires the computer to be turned on, and for the CPU to be “idling” until it can be set on its path through the memory addresses containing the software to be executed. The MITS Altair 8800 had a “STOP/RUN” toggle switch to change between these two modes in their purest, most direct form. Afterwards, microcomputers would come with a bare minimum of programming pre-written into a ROM chip soldered onto the motherboard, automatically loaded into RAM the moment the machine powered on. On the Apple II, PET, and TRS-80 this ROM chip contained an interactive BASIC programming language environment which, in execution, would present an on-screen prompt in which the user could type individual BASIC instructions for instant execution, or complete BASIC programs.
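As a rough illustration of that interactive prompt, the toy read-and-execute loop below mimics the behaviour in Python rather than BASIC: an unnumbered line runs immediately, a numbered line is stored as part of a program, and RUN executes the stored lines in order. The syntax and commands are invented stand-ins, not the actual ROM BASIC.

```python
# A toy model of an interactive prompt (invented, not the real ROM BASIC):
# unnumbered input runs at once, numbered input is stored, RUN plays it back.

program = {}   # line number -> statement, kept like a BASIC listing
env = {}       # shared namespace for the stored program's variables

def handle(line):
    first, _, rest = line.partition(" ")
    if first.isdigit():                  # a numbered line is stored, not run
        program[int(first)] = rest
    elif line.strip() == "RUN":          # RUN executes the stored lines in order
        for n in sorted(program):
            exec(program[n], env)
    else:
        exec(line, env)                  # anything else is executed immediately

handle('print("hello")')                 # instant execution, like typing PRINT at the prompt
handle('10 x = 2')                       # stored as line 10
handle('20 print(x * 3)')                # stored as line 20
handle('RUN')                            # runs the stored program, printing 6
```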

This back and forth between source code and execution reveals, or closely mimics, the interplay between RAM and CPU. For this reason, I would say that these first-generation microcomputers present the computer as the content of its own interface. To program a computer at a low level is to have perception of the dynamics of its deepest inner-workings. This might be an exaggeration, but this is as purely transparent as it gets for commercially-produced, mass-market computers. The RAM is clearly presented as a space to be filled with bytes from the keyboard: instructions written in the convoluted linear procedural style of the operations of a CPU. The computer screen is like an open box with no false bottoms, revealing all contents.

RAM is volatile, which means it is cleared when power is turned off. Saving computer data between power-cycles requires computer storage. Early microcomputers would offer BASIC commands to load and save the bytes in RAM directly onto audio cassette. Once a program was typed in by hand, the user could press record on their tape player and “play” the RAM to be recorded. Likewise, they could play the tape and have the computer fill up its RAM from what was on the tape.
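The cassette routine amounts to streaming the whole of memory out and then back in again. Here is a rough sketch of the idea; the filename and the toy RAM contents are invented stand-ins for the tape and the machine.

```python
# Dump "RAM" to a tape image as one continuous stream of bytes, then restore it.

ram = bytearray([0xA9, 0x05, 0x8D, 0x00, 0x04])   # a few arbitrary bytes "in RAM"

# "press record and play the RAM onto tape"
with open("tape_image.bin", "wb") as tape:
    tape.write(ram)

# later: "play the tape" and refill RAM from it
with open("tape_image.bin", "rb") as tape:
    ram = bytearray(tape.read())

print(ram.hex())   # a9058d0004, the same bytes restored after the power-cycle
```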

A more complex, discontinuous form of persistent storage necessitates an equally complex paradigm for human apprehension. This paradigm constitutes our first major pillar of cyberspace. It comes in the form of a metaphor of the old thing within the new: that of the filing cabinet full of paper files (Kenner 223). Despite being a metaphorical fiction, computer disk file systems are so ubiquitous today that the idea of digital files seems absolutely foundational to computing.

Digital Research Inc.’s CP/M Disk Operating System was the de facto standard in microcomputers until supplanted by Microsoft’s clone MS-DOS in the early 80s. Microcomputers with floppy disks or hard disks would first boot from ROM, which in turn would locate files on the storage device and load them into RAM until finally booting up to a DOS prompt. Instead of presenting the contents of RAM, this interface places the user within the space of the disk, full of files. Each file is a space measured in bytes, but what one types at the input prompt are not instructions for the CPU, destined for RAM, but rather commands which manipulate files. Anything you want to load into RAM will have to be made into a file first. Programs are files, documents are files, even the bootable environment itself exists on a spinning magnetic platter, addressable by an alphanumeric name and type. Instead of BASIC loading from ROM, this bootable environment on a disk, as a non-permanent collection of files, ushered into the microcomputer world the notion of the changeable, upgradable operating system.
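To feel the shift in paradigm, compare a toy model of such a prompt: what the user types are no longer CPU instructions but commands over named files. The command names echo DOS, but this miniature is invented for illustration and is not CP/M or MS-DOS.

```python
# A toy "disk" and command prompt: the user manipulates named files, not RAM.

disk = {"COMMAND.COM": b"\x90" * 512, "LETTER.TXT": b"Dear Sir..."}   # name -> bytes

def prompt(line):
    cmd, *args = line.split()
    if cmd == "DIR":                       # list files and their sizes in bytes
        for name, data in disk.items():
            print(f"{name:<12} {len(data)} bytes")
    elif cmd == "TYPE":                    # show a file's contents
        print(disk[args[0]].decode(errors="replace"))
    elif cmd == "DEL":                     # remove a file from the disk space
        del disk[args[0]]

prompt("DIR")
prompt("TYPE LETTER.TXT")
prompt("DEL LETTER.TXT")
prompt("DIR")                              # LETTER.TXT is gone from the "space of the disk"
```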

5. OOP and cyberspace

In procedural programming languages the CPU, in either a direct or abstract and simplified way, is the sole agent within the computer performing instructions as-written line by line. Object-oriented programming (OOP) was a radical departure from that paradigm, facilitating the conceptualization of the computer as a space to be populated with virtual entities. Its origins lie in the mid-1960s with a programming language developed in Norway, descriptively called Simula.

The objects in an object-oriented program are instantiated from formally declared specifications, often related in family-like hierarchies and derivations, sharing and inheriting properties. A programmer defines a type of object and then calls instances of it into existence, as though by magic. Linear procedures are nested within—and distributed among—the objects, instead of constituting the totality of the program. All the objects then act as discrete agents imagined to exist in simultaneity, always at the ready to play their role in the program when called upon.
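The flavour of this can be shown with a brief sketch in Python, standing in for Simula or Smalltalk: a type of object is formally declared, a derived type inherits its properties, and instances are then called into existence. The Vehicle and Boat classes are invented examples, not anything drawn from those languages.

```python
# Declaration, derivation, and instantiation of objects.

class Vehicle:
    def __init__(self, name):
        self.name = name
    def describe(self):
        return f"{self.name} can move"

class Boat(Vehicle):                      # a derivation, inheriting from Vehicle
    def describe(self):
        return super().describe() + " across water"

# instantiation: objects called into existence from the declared specifications
fleet = [Vehicle("cart"), Boat("ferry")]
for obj in fleet:
    print(obj.describe())   # each object, as its own agent, plays its role when called upon
```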

Much like the laws of physics which govern our physical universe, there are heavy constraints upon the programming style in OOP languages, the enforcement of which leads to its features. These constraints ensure that all the objects behave in ways which facilitate the orderly function of the simulation (Kay 238). For instance, in a traditional, procedural program the CPU is the agent and thus has full governance over any part of the RAM. In OOP each object has agency, and so the functions permissible to it are constrained to operating within its own local scope, preventing interference with other objects whose autonomy must be maintained. All interactions between objects must be through proper channels, clearly defined and adhered to. It is through this sort of explicit segmentation and structuring that the stability of an object-oriented program is maintained. From this stability, computer programs have become far more complex while remaining comprehensible. The programmer indirectly programs the computer by programming each object in accordance with the “physics” of the simulation.
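The constraint of local scope and proper channels can likewise be sketched. In the hypothetical example below, the only sanctioned way for any other part of the program to alter an Account’s balance is through its declared methods.

```python
# Each object guards its own local state; interaction happens through defined channels.

class Account:
    def __init__(self):
        self._balance = 0          # local scope: other objects should not reach in here

    def deposit(self, amount):     # the sanctioned channel for changing the balance
        if amount > 0:
            self._balance += amount

    def balance(self):
        return self._balance

a = Account()
a.deposit(50)
print(a.balance())    # 50, reached through the defined interface rather than by
                      # rummaging directly through the object's memory
```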

Of course, it’s all the same to the CPU. Languages are an abstraction afforded to humans. At the low level, all programs are executed in the same procedural manner. Object-oriented programs are re-cast, or compiled, into a procedural form for execution, albeit a heavily convoluted and unintuitive one. But the approach a programmer takes to conceptualizing the program is radically different between the two paradigms. The difference from the perspective of the CPU is analogous to that between plain, continuous, logical prose and discontinuous, symbolic poetry. In the first case, the CPU basically follows the procedures as conceived and written. In the second, it is jumping all over the place in accordance with a compiler’s interpretation of how a particular human mind conceived the complex, living interrelation of a mosaic of resonating objects. With OOP, the machine is contorted to suit the human nature of the programmer’s imagination, instead of demanding the programmer fit ideas to suit the machine.

Object-oriented programming languages are the second major pillar of cyberspace. As magical as computers might feel in and of themselves, when explicitly programming the CPU and RAM with a procedural language one is directly working with fundamental hardware instantiations of the components of a Turing Machine, and one feels grounded within it. In comparison, in object-oriented languages objects are simply declared into existence, with any desired properties, with a few magic words. Instead of being a programmer of a computer, one gets the feeling of playing god over a tiny interactive universe which just so happens to be accessed through a computer. Or, at least, that is the fantastic illusion which gets the computer itself “out of the way” of concentrating on its practical applications.

6. Spatial Desktop Metaphor

The first two pillars of cyberspace, filesystems and object-oriented programming, were in wide usage on institutional machines during the 1970s. During this time the fledgling microcomputer market was just gaining traction. In the development labs at Xerox Palo Alto Research Center, object-oriented programming was being combined with computer graphics to produce the third pillar of cyberspace: the spatial desktop metaphor.

The spatial desktop metaphor provided a graphical paradigm for running and manipulating applications – the one we are familiar with today. Graphical elements were married to all the abstract objects which could be instantiated in the object-oriented programming language. In the case of Xerox this was the SMALLTALK language by Alan Kay, directly inspired by Simula (Kay 231). The use of a mouse allowed the user to open and close files, which were now iconic pictures in addition to file names. Files were now considered documents, and were portrayed as sitting atop a virtual desk, or within virtual manila folders. The screen itself was now a “desktop,” as though a seamless extension of the desk the microcomputer system presumably sat upon. Menus and browsers and all the various other fundamentals of computers were innovated here, soon to inspire Steve Jobs of Apple Computer. Apple brought the desktop metaphor to market with the Macintosh computer in 1984. A few years afterwards IBM and Microsoft developed a comparable standardized graphical paradigm called Common User Access, or CUA.

When the Apple Macintosh commoditized cyberspace for the mass end-user market, object-oriented programming was rent from the graphical user interface, resulting in a completely non-programmable computer. For the first few years at market, Macintosh programs could not even be professionally developed on the Macintosh, instead necessitating the purchase of the more expensive Lisa for software development. Unfortunately, this omission of first-class programming features became standard for end-users, and microcomputer purchasers thereafter were able to internalize only two-thirds of cyberspace: the filesystem and the graphical user interface. What they did not internalize was the object-oriented programming pillar, which would have offered control over the substance of the simulation and the ability to create first-class applications within it. The popular programming language BASIC had no place within the new paradigm and was quickly obsolesced (outside of the niche specialist market of business spreadsheets).

By the late 1980s the advice to new microcomputer users was unanimous: you don’t need to learn programming; that’s only for old, obsolete models. Nobody used BASIC anymore, and nothing replaced it for mass use. Professional programming environments for Windows and Macintosh were available, but expensively priced. Apple eventually bundled powerful drag-and-drop hypermedia creation software called HyperCard with each Macintosh, but the included programming functionality was not emphasized, and its card “stacks” were rudimentary compared to actual computer applications.

7. Cyberspace

The I/O devices of modern computers, which allow representations of the inner state of the computer, were thus slowly morphed into conveyors of a content completely divorced and abstracted from the functioning of hardware, in favour of a simulated environment known as cyberspace. The key feature of cyberspace is that one can operate entirely within it, master it, find in it infinite possibilities — and yet still have no appreciable need to acknowledge or sense the reality of the actual hardware one purports to be “using”. Let’s summarize how this complete lifting away from physical space, within the I/O devices, was achieved.

The first pillar of cyberspace, the file system, was erected when RAM was supplanted as the central concept of “space” in a computer by physical storage. Instead of bytes in RAM taken up by program code, bytes on the magnetic disk became the primary space in which the user worked, such as in answer to the question, “How much space is in your computer?” RAM space was thus relegated to a relative statistic related to machine performance. And “files” became the fundamental unit and currency for virtually all computer operations. We exchange files, attach files, copy files, save files, print files, create files, find files, edit files, delete files, back-up files, rename files, and even burn them. It’s possible to imagine a young person first encountering a filing cabinet and drawing on a computer as a useful metaphor to make sense of its purpose.

The second pillar of cyberspace went up when programming was abstracted away from the basic procedural way a CPU actually functions, via convoluted linear movement through RAM space. Different programming language styles, such as OOP, innovated paradigms more useful to abstract human conceptualization and modelling for the sake of simulation. This development has allowed for greatly increased complexity of applications while burying system functions and the computer hardware itself further down the “stack” and out of mind. In the mass-market, commercially-sold operating systems most popular today, this pillar is omitted by default, leaving these machines purely as platforms for software consumption. Programming is today regarded as a specialist endeavour, not the central essential purpose of computer ownership which it was before ubiquitous software commodification. Only lesser-known Free software environments, such as GNU/Linux, guarantee the computer user total power over the simulated environment presented to them and the ability to drill down toward the actual hardware through legally-mandated source code availability.

The final pillar is the spatial desktop metaphor comprising the standardized graphical user interface. By extending the hand into a manipulator for on-screen representations of ideal virtual objects, applications and files were given an embodied existence approaching that of physical objects. This is a visual portrayal of the dynamics imagined by programmers while writing in object-oriented languages. Graphical programs are overwhelmingly written in OOP languages because of this correspondence. This skeuomorphic graphical user interface, the third pillar of cyberspace, is what turns the imaginary nature of the first two into a form which strikes the raw senses. By offering pictorial and iconic presentation, simulated physical surfaces, and immediate feedback, the resonant interval between user and abstract, fictional space is closed with a staggeringly intuitive interface across the gap.

8. The Content of Cyberspace

Now that we can perceive cyberspace, as three main pillars erected upon the foundation of the CPU and RAM, the fundamental “head” and “tape” of the Turing Machine, we can consider what constitutes the “content” of a computer.

In the media-studies sense, the content must be what strikes the senses first, conveying immediate impressions to the beholder’s senses. Opening up the box and looking at all the microchips won’t reveal any content; it is the I/O hardware which provides interface. This makes the I/O devices themselves the medium. Therefore the screen, keyboard, and mouse presumably each have effects in and of themselves, such as the light-through, mosaic quality of television, the impersonality of the typewriter (McLuhan 1970, 102), and, perhaps, the endowment of meaning to subtle hand gestures (not to mention carpal tunnel syndrome).

When the low-level operation of the computer is being interfaced with by a programmer, then the computer is the content of the screen and keyboard. It is the state of the actual hardware being represented in appropriate form, and so the symbols the human operator manipulates tightly cohere with the microscopic and electrical events actually taking place in the computer itself. Only in this scenario is the computer itself, the Turing Machine, making a direct sensory impact upon the user.

In all other cases, the content of a computer is cyberspace. Within that fictional, interactive, abstract world of metaphorical objects and artistically architected contexts, the bits and bytes which make up the fundamental content of computers are, virtually without exception, encapsulated in structures called files. Files form the ground of cyberspace: the membrane and substance of the imaginary interactive world. Everything computers are advertised to do can be done without breaking the illusion of the file-system environment, since absolutely everything in cyberspace consists of files. Program files run the operating system, driver files work the hardware, and user files make up the content—filesystems make escape from cyberspace exceptionally difficult.

Cyberspace only exists in the human imagination, like the tobacco paraphernalia in the splotches of paint we call Magritte’s Pipe. What actually does exist—the CPU and RAM and other necessary hardware—is not presented to the senses. It is a component of the actual medium, which is primarily the interface of the I/O hardware. The computer itself is like a motor added to the horse-drawn carriage of television. The only time the CPU and RAM are noticeable is in their performance of orchestrating the cyberspace simulation—as when it slows down, stutters, or gets too hot to hold.

Having traced a sufficiently-nuanced perception of computers as media, we can now consider their effects in the environment. As electronic media, they include the user in depth. Given the interactive nature of the medium, unlike TV, the style of the user’s computer literacy will determine which manner of all-at-onceness, or processing, will be presented to the senses. Let us consider some initial probes down two orthogonal dimensions of depth which might be involving to the user.

9. The Depth of Cyberspace

The first depth is that of McLuhan’s analogue electronic media: the depth of content entailing all the past and all of the planet in a simultaneous happening.  Given the interactivity of computer content, it is an uncannily literal instantiation of McLuhan’s much-heralded programmable environment.

This depth is through the terrain of cyberspace and its files and applications and windows of content. Within cyberspace all disparate media have fused into rear-view multimedia, below which reads “Objects in mirror are less real than they appear.” When the telephone entered cyberspace it threw in, as a bonus, tiny forms of all 20th-century media mangled to fit into a new-fangled retrieval of the transistor radio. You can carry a video editing suite in your clutch, or drop your multi-track recording studio into the loo.

All symbolic forms that can be contrived by marketer poets, abstract architects, or anonymous exhibitionists saturate this ever-fluxing interior landscape. The bands of multimedia content, perceived in mythic simultaneity, overwhelm and paralyze the user who, in order to be productive at all, must stake out one small corner of life to operate in, while in perpetual awareness of the limitless potential connection and access being neglected. Companies and developers—accountable to stockholders, regulations, market demands, and organized consumer revolts—work to maintain control. They process and control different aspects of the experience in the virtual space on non-negotiable terms users agree to upfront and unread. In a few years all is rendered “obsolete”, owing to changes in the fiction which are out of the alleged device “owners’” hands, and the whole pseudo-cycle starts again. When computing devices themselves are the object of attention, their depth is the pop-mythology of computer history—the gadgets are the products of the shrewdness of Bill Gates in business, or the uncompromising aesthetic vision of Steve Jobs.

10. The Depth of the Stack

The second depth is that of the software itself. It is the depth of perception of the physical device as it is. This requires toppling the pillars of cyberspace, piercing its veils, and interfacing down the computer stack, through to the underlying software and hardware. It is awareness of the nature of the physical object in front of you—if the cathode ray tube were as complex as a modern computer, then McLuhan would have correspondingly written far more poetry about television’s scanning finger and shutterless images. Every element and layer of cyberspace has source code, and source code produces software which runs, harmoniously flattened, on the same hardware. GNU/Linux users have full control over their machines, and account for over 2% of desktop computer operating system usage (“Operating System Market Share”). By contrast Windows, MacOS, iOS, Android, and Chrome OS are to be used as-provided (barring radical measures to reclaim ownership).

The commitment to swim upstream and fully own and control computers themselves, rather than use them as pre-fab cyberspace platforms, is necessary to cultivate this depth perception. This dimension is probed and made into sense by computer specialists (such as developers or cyber-security experts), hobbyists, hackers, all those rebelling from corporate computing—and all those looking to profit from it. This dimension constitutes the anti-environment to the consumerist elements of the first. It’s the store-brand, home-grown, co-op, thrift-shop life. It is organic, chaotic, anarchic, largely unmeasurable, mostly free in both price and in allowances for possibility, and has existed ever since computer hobbyists began with the first microcomputers in the 1970s. It is McLuhan’s land of the drop-out, the fertile area from which sprang up the mythic giants of computer company lore (Kenner 224). It is spoken of in their autobiographies and re-enacted in dramatic biopics, but has remained ignored by mass media in every other way. Any innovations which occur here remain here until they can be monetized within the first.

11. Technological Determinism

The nature of technological determinism observed by McLuhan has been largely misunderstood. It has been widely interpreted as meaning that technology inevitably brings on social changes beyond possible control (Frye 20-21). Discussing the unperceived, and hence deterministic, effects of media is portrayed as facilitating the justification of the excesses of technological capitalism by admitting its potential as a force for social change (Bolter & Grusin 76). In fact, it was ignorance of the effects of media which led to its role in determining fate (McLuhan 1964, 304). McLuhan’s first-hand observation of ubiquitous ignorance of media, and hence of media’s effects, led to his life-long insistence upon rectifying it through mass education, so as to allow human consciousness to persevere over technological determinism (McLuhan 1970, 135).

Everything about modern computer interfaces leads computer users toward a focus on the content of cyberspace. This content is the digital version of all of our 20th-century media. The high-level cyberspace and the low-level computer itself, as the content of computer I/O devices, constitute the largely ignored environment which is sutured into the perceptual fabric of physical reality. Owing to the absence of the programming pillar from consumer cyberspace, even when the environment is examined, control over it is ceded to manufacturers, tech firms, and advertising companies.

Computers, which used to require a lot of learning, are now easier and easier to use. Everything which once hinted at computers requiring technical knowledge, such as the “https://” protocol specifier preceding web URLs, is being hidden away as too confusing. To be easy there must be fewer ways of doing things. Constraints on the virtual space come in, just as they do in OOP. Fewer ways to do things means fewer ways for things to go wrong.

As the computer and cyberspace environment recedes further and further out of perception, through the endless busy-work and upgrade-bait of optimization, content becomes more and more the sole focus of attention. Increased reliability, consistency, and ease of use increase the comfort and trust in the stability and permanence of the digital environment containing that content. The high-level, simulated objects present themselves to the senses as more and more real as the volatility of RAM becomes less and less of an experience.

Take, as an initial probe, the all-important non-existence of so-called computer “files.” They do not exist, but their certain existence is reified to our senses every day. Merely knowing that they don’t exist will not change the sense perception which has been trained by years of experience. Truly conscious perception and intuitive feeling for all the ensuing ramifications of the fictional nature of files will arise only when the simulation fails. Somewhere, right now, the slow, entropic diffusing of spots of burnt ink on a CD-R is being neglected owing to misapprehended 80s marketing rhetoric about the permanence of optical media. Elsewhere, so-called photographs are being uploaded to a social media site—which will shrink and compress them to little more than thumbnail quality—while the priceless high-resolution originals are deleted to free up space. And elsewhere still, someone is hunting around for the “reset password” link for the encrypted drive partition their nephew set up for them. These events, while tragic, are key to computer literacy. Bless our blue screens for shocking our senses from the dangerous illusion of a world which does not exist. The painful tear in the mental merger of the real and the cyberspace is effected when the fictional objectivity of virtual objects is shattered. RAM is volatile in a way physical objects will never be. However, as machine reliability improves, and as molly-coddling interfaces arise to prevent permanent data loss, these opportunities for learning the difference disappear.

The classic humanist conception of the individual was created by the phonetic alphabet and books, according to McLuhan. Electronic media were eroding that foundation during his lifetime, and now posthuman scholarship seeks sustainability for non-unitary, nomadic, post-rational forms of identity for the third millennium (Braidotti 190). In consideration of the post-human subject, agency can be afforded to machines (43) which can embody and become embodied within human identities (104). A more exact recipe for technological determinism could not be developed! A proper computer literacy, which reveals the ungrounded nature of cyberspace, is needed so as to facilitate the common perception of computers at their demonstrably-real level of existence. Meanwhile, within posthuman discourses, many people with subjectivities so expanded and merged idiosyncratically with technology beyond the classical, Kantian sense are already considered posthuman themselves.

And are we all not already halfway there? We consider text documents and photographs to share a single identity between their digital and analogue instantiations. Somehow, my résumé as a series of pixels on a screen, rendered by software from stored bits and bytes, shares a single identity with my résumé as a piece of paper. Our casual language lacks any distinction whatsoever. This is the media-illiteracy of the posthuman in a digital world. The reckless abandon with which mediated forms are sensed with equal concreteness to physical forms—for want of any deep sense of computers-as-they-are; for want of computer literacy—leads to the idiosyncratic stitching of the virtual and the real. As cyberspace carries more and more of reality itself as its content, what is invisible to the post-human about their environment comes to play an ever greater unconscious, determining role in their lives.

Computer operations are microscopic and occur at unimaginable speeds, yet they do take place in physical reality. To know that conceptually, to spout off a computer’s specifications as written on the side of the box, is one thing. Training one’s senses to perceive it is another, and is ever more difficult in a world where the simulation grows ever more verisimilar. It is very difficult to train yourself to sense what a computer is doing when its very existence is being optimized out of our senses, while the superficial appearance of the interface becomes ever more attractive, easy, popular, and fun. The seductions of cyberspace will only increase in amplitude, and without mass knowledge the programmable Turing Machine will become more a fiction to which lip-service is paid than an experiential reality. As most look at the content, the medium will be something minded by only the few who can escape the maelstrom.

12. The Computer Literate

What is the situation of the computer-literate? They are vastly out-numbered by the post-human (who, in a post-individual way, share a common identity) and thus dwell in the anti-environment to present society. Their opinions on computers must either undergo radical simplification to be communicated, or stay confined to discourse within the computer-literate subculture. They own the internet, either by working for or running online companies, or by running circles around casual social media users.

In the 80s and 90s they had a thriving analogue media market, including books, magazines, television programs, and television channels, and they even held meet-ups called user groups where people socialized and taught computer literacy to interested newcomers (Kenner 223). This was all possible because computers were distinct from all other technologies and had a learning curve. As the internet was popularized in the 90s, most of this social activity moved online.

Rules regarding children are an ideal case study of how the internet felt as a strange, new technology in the world. School curricula were created in the 90s to teach children responsible internet usage. Internet safety guidelines were written for parents and schoolteachers. The cardinal rule was to never give anybody your name, location, or phone number. There were tips like keeping the home computer in a common area so that kids could always be monitored, keeping them out of unmoderated chat rooms, and talking with them about their online activities, just as one would about the events of the school day. The dangerous aura which internetworked cyberspace radiated as a new, complicated, vast frontier—an unknown variable—provided the impetus for all caution. The sense of risk in letting your child connect to god-knows-who-or-what was acutely felt as basic common sense.

Another major factor was privacy. The tracking of user activity online was regarded as an abominable violation of privacy. Browser cookies were regularly deleted as a natural response to the commonly sensed violation of home and property that this invasive activity was felt to be.

And yet, somehow, computer literacy has become worse today. As computers became easy to use, the content of cyberspace took over and the computer disappeared from public perception. Tablets and smart devices have replaced them, and simultaneously repealed all laws of safe, responsible behaviour online. Children are indoctrinated into belief systems by strangers within months. The internet is no longer the internet: it has been rebranded as the friendly-sounding social media. The vast, distributed web has been drawn into a few centralized websites where all of the world’s notable people are gathered like fish in a barrel to be shot at with vitriolic hate and conspiracies, warranted or not. Reputations are permanently destroyed for a global audience, who then demand humiliating acts of contrition for an unlikely chance of redemption. In other words: regular internet happenings, except upscaled and with real identities instead of anonymous avatars, and with terrible, anonymous moderators. And all while the personalities of billions of people are being intimately profiled for future subliminal manipulation by digital shadows who will stalk them through cyberspace and meatspace for the rest of their lives.

What can explain this deliberately induced regression from decades of wisdom which had been accruing since the 70s? Easy-to-use cyberspace, irresponsibly thrown without warning into the hands of the media-illiterate, has undone the hard-won wisdom of the first generations of computer owners.

Content has eclipsed the most powerful, manipulative medium we have ever devised. Where does that leave the computer literate, those outside of the simulation? Feeling powerful, perhaps:

The social-impact of simulation – the central property of computing – must also be considered. First, as with language, the computer user has a strong motivation to emphasize the similarity between simulation and experience and to ignore the great distances that symbols interpose between models and the real world. Feelings of power and a narcissistic fascination with the image of oneself reflected back from the machine are common. (Kay 244)

Alan Kay, who erected the second and third pillars of cyberspace, warns us here about the effects of private computing. Today one’s actions in cyberspace, with all the affect of a simulation on a screen, can be felt as raw experience by real, living people on the receiving end!

The disturbances caused by unbalanced power dynamics in cyberspace cannot be levelled or equalized by rules, laws, or regulations. Not by appeals to ethics or morals, to common causes or common enemies. Not by shaming or praising. Nothing in the human sphere of conception can change things until the medium itself, as the hidden environment that it is, is addressed. Those with a sense of the medium, the computer literate, are not getting any dumber. Thus, it is incumbent upon everyone else to grow smarter. Every level of animosity in public discourse today is brightly illuminated when considering disparities of media literacy and computer literacy between the combatants. That’s where progress will be found.

 

Works Cited

Bolter, J. David, and Grusin, Richard. Remediation: Understanding New Media. Cambridge: MIT Press, 1999. Print.

Braidotti, Rosi. The Posthuman. Cambridge, UK: Polity Press, 2013. Print.

Frye, Northrop. “The Modern Century.” 1967. Northrop Frye on Modern Culture. Ed. Jan Gorak. Toronto: University of Toronto Press, 2003. 3-47. Print.

Kay, Alan. “Microelectronics and the Personal Computer.” Scientific American, 1977. Print.

Kenner, Hugh. “McLuhan Redux.” 1984. Mazes: essays. San Francisco: North Point Press, 1989. 223-229. Print.

McLuhan, Marshall. Counterblast. London: Rapp & Whiting Limited, 1970. Print.

McLuhan, Marshall. Understanding Media. New York: McGraw-Hill, 1964. Print.

Rheingold, Howard. Tools for Thought. Cambridge: MIT Press, 2000. Print.

“Operating System Market Share.” Net Marketshare, Dec. 2018, https://www.netmarketshare.com/operating-system-market-share.aspx. Accessed 5 Jan. 2019.

 

Bonus snippets!

The anti-environment of the computer literate is maintained through recognizing devices which are no longer called computers as computers.

With full perception of the coercive, invasive environment which consumer cyberspace has become, the post-human takes on the role of P.T. Barnum’s proverbial sucker in the perception of the computer literate. They are poor souls incapable of comprehending the nature of their environment, thrashing violently against a world which refuses to present reality to their senses. The paranoid might see them as slaves to mechanisms of control orchestrated in a grand conspiracy. Or they are piggy-banks waiting to be cracked open along their advertised fault lines and emptied for commercial success. There is a fortune to be had, or

The medium of cyberspace distorts the humanity of those who are using it as a window onto the world, and onto the nature of people.

But in the first decade of the 21st century, the lines were slowly blurred beyond all distinction. Today’s world of social media, personal exhibitionism, and artificially “slow” devices with clock speeds measured in gigahertz is a farce. The wild, dangerous internet which terrified parents of the 90s was smuggled unrestricted into everybody’s pockets through the combination of cellular telephone “upgrades” and free wifi. The freakshow of poorly-moderated internet communities was rebranded with the friendly-sounding name “social media” and advertisements exhorted everybody to climb aboard and share every detail of their lives with the world.

 
