Why Is This Hill So Steep?
E-books: The simple idea, sensible concept and understandable product that took twenty years to pull off
by Steve Jordan
Table of Contents
1: Traditional publishing—The Castle mentality
2: Enter Windows—The modern computer era begets the Format Wars
3: Printing—The shotgun marriage of paper and computers
4: The Web—The wild card no one knew was in the deck
6: Computers’ mid-life crisis—The PDA, cellphone and netbook threaten the marriage
7: The programmers—The ME generation
8: The literati—The peasants are revolting! (You can say that again.)
9: The anarchists—We will bury you.
10: The consumer—Tear down this wall!
11: Luddites and fanatics—What’s wrong with just reading books?
12: The accountants—How much is an electron?
14: Copyright—From here to eternity
15: Music—No, we are the future
16: Apple—iTunes to the rescue
17: Amazon.com—The game-changer
18: The amateur authors—Gonna fly now
19: The pro authors—Fight or flight
20: The environment—The green movement vs. good ol’ paper
21: The technology—My reader’s better than yours! (Nyah!)
22: The marketers—Ads about nothing?
23: The literature—Prisoner (casualty?) of war
24: The gurus—When you can snatch the e-book from my hand…
Epilogue: The future—Where are my flying books?
The e-book edition of Why Is This Hill So Steep? is copyright ©2009 Steven Jordan. All rights reserved. This e-book is intended for private use only. Please do not reproduce this book for the purposes of mass distribution without the express permission of Steven Jordan. After all, he’s just a guy trying to make a few bucks. What, you don’t think people can afford a few lousy bucks for a book? What are you, an anarchist or something?
The opinions expressed in this book are those of Steven Jordan. Every attempt has been made to portray the entities described within this book honestly and fairly (even the dishonest and unfair ones). This book is a work of personal commentary and general historical reference; it does not provide specific figures or names beyond those required to describe the information contained herein accurately. Any questions about this manuscript should be directed to the author at SteveJordanBooks.com.
Blurb:
E-books. Electronic. Books.
Sounds like a simple concept, doesn’t it? So why has this simple concept taken so long to develop, when other forms of digital commerce and media have become modern sensations? Because of a series of events and forces acting against it that would seem too improbable to believe in a dime novel.
(Or maybe an e-book.)
If you want to know where e-books are going, it will help to know where e-books have been, and why they still seem to track mud on the floors wherever they go. This book sheds light on a perfect storm of publishers, corporations, professionals, amateurs, dogmas, movements and beliefs, all of which worked either unintentionally or deliberately to forestall the coming of the e-book for over two decades. And it details which of these elements is still going strong and continuing to hold back e-books. At last, you’ll learn how badly e-books have had the cards stacked against them, and why.
Introduction
E-books. Electronic. Books.
It actually sounds very straightforward, doesn’t it? Especially in the opening decade of the twenty-first century, after we’ve spent half a century learning the intricacies of the computer age, twenty years getting used to computers on our own desks, ten years getting used to the computers in our pockets, goggling (not “googling”) at the digital images computers are injecting into our televisions and movies, marveling at the reality of instantaneous global communications, googling (not “goggling”) the information suddenly at our fingertips, and watching over our shoulders for the computers that will be telling us what to do, picking up after us, and reporting on us when we are naughty. With all that going on, how strange can electronic books possibly be?
But in one of those strange-but-true stories that no one would have believed if they hadn’t witnessed it with their own eyes, e-books have turned out to be the poster child for everything that’s wrong with the world’s transition to a digital future. A very simple concept for a very basic item, letters arranged to communicate ideas, has struggled for no less than twenty years without achieving the popularity and ease of use that every other form of electronic media has managed in half the time or less.
And it’s not as if no one has seriously tried to shift literature into the digital era. In fact, everyone from world-dominating organizations to basement programmers has tried, and all have… well, if not failed per se, their efforts have to date been less than successful.
Now, more authors, publishers, companies, hardware manufacturers, software programmers and consumers are being attracted to e-books than ever before. Further, experts, scholars, technicians and businesspeople have watched the industry closely, and in almost every case have thrown in their comments, suggestions and recommendations, with the idea of helping to get e-books back on track.
All of this effort… all of this attention… all of this potential… has carried on for the past twenty years, without real success. Which raises the question: What in the name of Gutenberg has been going so wrong?
~
To begin with, we need a good idea of what an e-book is… a seemingly trivial point, but not a trivial one, because the events of the past twenty years have managed to muddy even that essential picture in many people’s minds. Today it is amazing how many different opinions there are about the essential nature of e-books.
An e-book is, essentially, an electronic file designed to be processed through an electronic device. The electronic file contains a document, which can be as simple as raw text, or made much more advanced through additional code that provides formatting to the text… font type, size and color, italics, bolding, headers, etc. It can also include non-text elements, like graphics, sounds, even video, and special programs that can imbue the document with various “special effects” like color-changing objects, animated elements, etc. Generally speaking, though, an e-book is much like a printed book in that it is expected to hold text and graphics, and little or nothing else.
This definition encompasses a wide range of electronic files that are considered e-books. A basic ASCII-based text file (known by the “.txt” extension at the end of the name) is an e-book. A document created in the Microsoft Word format is an e-book. A document saved in the Adobe InDesign format is an e-book. Basically, any electronic document encompassing text, maybe some graphics, and occasionally other elements, is an e-book by this broad definition.
Today, however, many people in and out of the industry see e-books as a more specialized kind of electronic file, accessed only through specialized software designed to optimize the reading experience for the consumer. The point is to create a file that the consumer will not have to configure, program, or otherwise mess with, other than to open and read it.
This definition removes the more common formats like ASCII, Word, etc., from consideration, and replaces them with formats designed specifically for reading electronic files. These specialized formats have names like eReader, MobiPocket, FictionBook, Palm Doc, Microsoft Reader, Sony Reader and, among the better-known names today, Amazon Kindle. They have unique document extensions as well, including .LIT, .PRC, .PDB, .FB2, .LRF, etc. There are many, many others, but the formats listed here are among the most commonly used. About the only such application that is well known by e-book readers and non-readers alike, around the world, is the Adobe Reader (formerly known as Acrobat), which reads .PDF files. Most of these file formats are not designed to be edited by the consumer; in general, the electronic documents must be created in a different application, such as MS Word or Adobe InDesign, then converted to the chosen e-book format using special software tools. And most of these formats can only be read by the application software designed for them—Adobe Reader will not open a FictionBook file, MobiPocket will not open a Sony Reader file, etc.
A notable bright spot (one of few) in all of this format mania is a relatively new format, developed by the International Digital Publishing Forum (IDPF.org) as an open, easy-to-use, pretty-much-universal format for e-books. EPUB, the successor to the earlier Open eBook (OEB) standard and usually referred to by its file extension, .EPUB, has been endorsed by authors, publishers, and hardware and software makers, and is rapidly becoming the default e-book format industry-wide in the Western world. Many of the hardware devices on the market already read EPUB files, new software applications for computers and personal devices are being developed, and more e-books are being released in this format by the day.
These files must be read in computer applications specifically designed to read them, just as Adobe Reader is designed specifically to read PDF files. Many of these applications are available for computers of all kinds, usually free to download and relatively easy to install and use. Others are pre-installed on dedicated hardware devices, like Amazon’s Kindle and Sony’s Reader, and may be available for other hardware to download and use. Most importantly, most of these reading applications can read only one or a few e-book formats, requiring the consumer either to restrict their e-book reading to those formats, or to keep multiple reading applications or devices on hand for whatever format their latest e-book happens to use.
Notice: The waters are muddy already, what with multiple formats, multiple reading applications and hardware. And it gets better: E-books may be available at many e-book-selling websites, or sometimes only one; and the e-book sites sometimes offer multiple e-book formats, or sometimes offer only one format. So a consumer must juggle which books they want to read against which sites they are available from, which formats they are available in, and which reading devices or applications read that format.
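For the technically inclined, the juggling act described above can be sketched in a few lines of code. The extension-to-application pairings below are illustrative only—an assumed toy registry for the sketch, not a complete or authoritative list of which reader opens which format:

```python
# A toy illustration of the e-book format-juggling problem: each reading
# application understands only a few file extensions, so a consumer must
# match every book they buy to a compatible reader.

# Hypothetical registry of readers and the extensions they handle.
READERS = {
    "Adobe Reader": {".pdf"},
    "Microsoft Reader": {".lit"},
    "MobiPocket": {".prc", ".mobi"},
    "eReader": {".pdb"},
    "Sony Reader": {".lrf", ".epub"},
}

def readers_for(filename: str) -> list[str]:
    """Return which applications (in this toy registry) can open the file."""
    ext = "." + filename.rsplit(".", 1)[-1].lower()
    return sorted(app for app, exts in READERS.items() if ext in exts)

print(readers_for("novel.prc"))   # a format with a reader available
print(readers_for("novel.lit"))  # a format locked to a single reader
```

The point of the sketch is the lookup itself: a consumer’s library effectively forces them to maintain one reading application per format family, exactly the muddy waters the text describes.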
~
This is the state we find e-books in at the end of the first decade of the 21st century. With such a litany of incompatible formats, reading devices and applications, is it any wonder that e-books have taken twenty years to take off? The scenario would seem to be designed by a sadist, and appealing only to a masochist! If it’s this bad, why shouldn’t we just put e-books out of their misery?
Because of the incredible potential of e-books, that’s why. E-books are distilled literature, text reduced to its essential nature. An e-book is not only capable of being read, it is free from physical constraints, and therefore malleable, flexible; its final format can be altered to fit the user’s needs. Fonts can be altered to styles, sizes and even colors that are easier on the eye. Paragraphs can be massaged to allow comfortable spaces between words and lines, and backgrounds can be adjusted for contrast as desired. Some e-book applications can read text aloud, and others can translate the text into other languages. None of these things can be matched by printed text. E-books represent no less than the natural evolution of literature to a new and better medium, the natural progression of text from clay, to stone, to hide, to linen, to paper, to electrons.
E-books are practical products. Electronic devices, by their nature, can hold numerous document files, allowing the consumer to potentially replace hundreds, even thousands of printed books, with the files on one electronic device. That means less physical space taken up by shelves of books… the number of books an individual can own is no longer limited by the amount of storage space they have available. And your collection of books is ultimately portable, ready to travel with you in its entirety at a moment’s notice.
E-books are environmentally positive products as well, another advantage over paper. As pointed out, an electronic device can replace hundreds to thousands of printed books. Although an electronic device requires power, chemicals and precious (and sometimes toxic) metals to produce, so, in fact, does paper… and a single electronic device can replace literally tons of paper product. Finally, paper is made by harvesting trees, a process that contributes to the denuding of our forests, the loss of carbon-sequestering and oxygen-producing vegetation, and ecological destruction and global warming worldwide. We need trees more than we need paper, especially now that we have a clear alternative in electronic devices.
Finally, e-books’ nature makes them easily transmitted to the four corners of the world, almost instantaneously. The world’s knowledge can be shared and accessed from anywhere, not just in the corner of a community library, or in someone’s private study. The potential for sharing information, raising the literacy rate of people all over the world, and bypassing many of the practical concerns that kept some of those people forever isolated, cannot be overstated or dismissed. These factors are being discovered by more people every day, and are driving a resurgence of interest in e-books unlike any previous period in history.
Given e-books’ world-altering potential, it’s no wonder that an idea that has struggled for twenty years has not yet been abandoned… doing so would be a crime against technology, practicality, ecology and humanity. We have everything to gain, and nothing to lose, by sorting out the problems saddling e-book development and setting it on the right course.
It is tempting to think that there is some obvious villain in a story like this, some specific agent whose nefarious or misguided actions have been pounding e-books to a—pardon the expression—pulp for so long. But if it had been only one, or maybe even two or three such villains, they would have been identified and vanquished by now. It must be more complex than that, you’d think.
And you’d be right: It’s much, much more complicated. In fact, the villains in this story are so numerous that they could scare the Legion of Super-Heroes into taking an early vacation. These villains, operating jointly and/or independently, have created a perfect storm of resistance that has kept e-books struggling against the waves for the past two decades. And thus far, every attempt to save e-books has concentrated on only one or a few villains at a time, leaving the rest to continue shoving e-books under the waves.
In order to deal with e-books’ problems, find solutions, and finally pull e-books into safer waters, we need to deal with all of the villains in the story. But to do that, we must first know who they all are, so we can devise a strategy against them. Identifying these villains, so we know who we are fighting, is the purpose of this book.
~
In understanding technology, I have found it useful to understand history. Not just the history of technology, but the history of people… of politics… of social systems… even of geologic activity… for all of these things have impacted technology, its development, and its usage, over time.
This was the tactic of scholar and historian James Burke, whose series of videos and books known as Connections inspired me, as a young man in college, to develop a full historical understanding of how and why technology has been shaped over the years. Burke’s later works, Connections 2, The Day the Universe Changed, and After the Warming, continued the theme of understanding human technology by understanding humans, and the environments that influenced them over time, to make sense of technology, and of our present-day world.
As it so happens, my perspective on this subject is due to some unique “connections” of my own over the years. Though I was not directly involved in e-books during the twenty-year time period described in this book, my career paths have coincidentally paralleled the development of electronic files and e-books, allowing me a clear view of the developments over the years, and how they have impacted those inside and outside of the fields involved.
In the 1980s, my college years brought me into direct contact with young programmers learning all about the incredible new Big Thing, the mainframe computer. These programmers were already envisioning the ways in which access to computers in everyday life would transform our world. As for myself, I was already getting tired of Tank Wars.
In the 1990s, a friend of mine tried to get me a job at a consulting firm, creating reports and graphics using computers. That specific job did not materialize, but the same company instead offered me a chance to manage their new reprographics offices. They had just installed high-speed reprographic equipment (copiers, to the layman), connected through their office network, down the hall to the very document creators I had initially applied to work alongside. During this job, I not only learned how to operate computer-controlled production equipment, but also how to optimize digital documents for printing, and I applied the latest in office computer software (at that time, Windows 3.1) to automate and modernize the department’s job tracking and billing systems. The digital document era was developing before me, and I was comfortably keeping pace with it. I watched the early development of Adobe Acrobat, and its subsequent dominance over its competitors in the fledgling digital document industry.
The consulting firm relocated outside of my commuting comfort zone, so I moved on to a downtown Washington, D.C. think tank that needed help tying its high-speed networked printers into a 5,000-plus-document-a-day workflow. As I developed their networked workflow system, showing them better ways to turn computer-generated content into higher-quality finished products, I was also training myself in the arts of webpage design and production, which I saw as the next step in my professional career. Once I had finished updating the digital workflow system at the think tank, I obtained a web designer’s job with a downtown government contractor. I produced digital documents for federal government reports and projects, and I created and maintained federal web sites.
I also learned about designing web pages to comply with federal regulations protecting Americans with disabilities, and the importance of adhering to formatting standards. The office was operating multiple computers, seemingly with a different operating system and set of programs on each one, connected through “sneakernet,” and using the desk phone lines for the computers’ dial-up modems. Even in 2001, I knew they could do better, and I talked the boss into modernizing the office systems. I arranged to have DSL and a networked file server installed, and standardized their multiple computers with the same operating systems and software for easier maintenance and support, and for consistent documents produced by any station in the office. I acted as the office IT manager for my duration there, improving their workflow and products and saving them thousands of dollars in IT needs.
During this time, I had done my web research, and developed the first generation of my own e-book sales websites. I closely followed the e-book industry, in order to optimize my sales model and improve my e-book products. The site has undergone two new generations since then, keeping up with sales trends and site design developments to present the most attractive marketing package to customers.
When work began drying up at the contractor, I resigned and, after just a few bounces, ended up at a Washington, D.C. non-profit, maintaining their web site and digital documents. The organization maintained a vast collection of documents on its website, which gave me the chance to learn the ins and outs of a content management system, and what it meant for digital documents.
These positions, working closely with digital document creators, controlling networked printing systems, organizing and standardizing hardware and software in business environments, developing websites, and producing my own digital documents, have kept me closely in touch with the various disciplines that have shaped the e-book industry since the 1990s.
~
I think the Connections approach is essential to understanding what has transpired in the e-book industry over the past two decades or so, to create the situation we e-book fans lament daily. Some of it is also important to understanding how the effects that have adversely impacted e-books can be countered and corrected. Many people do not immediately see the need to understand the historical significance of a present-day object or movement, but there’s an old saying: Those who don’t know history are doomed to repeat it. Many new players are entering the e-book arena every day, and these players, as much as the veterans, stand to do as much damage as good if they pay no heed to the mistakes of the past, and continue to make them into the future. E-books could continue to founder on the rocks for another decade or more, or even slip finally into the depths, not to resurface until all of the villains have passed on, history has been forgotten, and a new world, a few generations removed, can try again.
Personally, I don’t want to have to wait that long.
And so, this text illustrates the many, sometimes complex and sometimes thoughtless elements that have worked to keep e-books down for so long. The elements are presented roughly in order of their impact on the industry, which was sometimes chronological, sometimes technological, and sometimes dependent on another element’s actions, but all ultimately interrelated and significant. These aren’t presented as dry facts and footnotes; rather, this is the history of e-books from the perspective of someone who has walked alongside them, even when he was not aware of it, for the past thirty years, only to suddenly find himself an evangelist and authority on e-books to those on the outside looking in.
I offer my perspective on the painfully-plodding history of e-books to those who seek to enjoy e-books, and possibly to be a part of the developing industry. Hopefully, a full understanding of the many elements involved will help us to guide e-books back off the rocks, and onto a course that will result in a happy, healthy industry in time. At the very least, it will hopefully stave off another twenty years of what we just went through. (I couldn’t take another of those, myself.)
1: Traditional publishing—The Castle mentality
Once upon a time, in the 1400s—
—Okay, our story doesn’t really begin that far back. But the analogy applies, so let’s pull up our leggings and get set for a quick history lesson. Back in the 1400s, feudalism ruled in most of Europe. Men with money bought or grabbed land, and bought or grabbed men to help them hold on to it. They constructed community homes called castles, where they and their most trusted men, their women and their servants would live.
Outside the castle were the peasants, the people who worked the land owned by the landowner, providing the food for the landowner, cloths and leathers for clothing, and assorted knick-knacks for daily living. In exchange for this work, the peasants received a modest share of the food and goods, and protection from outsiders… in theory. In practice, however, the share of goods was often meager, depending on what was left after the landowner and his trusted inner circle had their share… and the amount of protection given to those outside the castles was often minor to non-existent. (In fact, after a time, most landowners tended to respect other landowners’ property, so there was rarely anything to be seriously protected against.)
For the peasants who lived outside the castle walls, the choices were few: You either did your local landowner’s bidding; or you left his land, and in exchange for living on a new piece of land, you worked for its landowner; and things were rarely any better from one landowner to the next. A third possibility was to do something that would get you into the castle as one of the landowner’s trusted inner circle. But this was even more difficult to accomplish, as you had to have a skill or possession so impressive as to beat out all of the other peasants who wanted into the castle as badly as you did. In general, the majority of peasants would never see the inside of the castle in their lives.
The landowners were well aware that their resources, their comforts, and their space were finite. In order to maintain their standard of living, therefore, it was in their best interests to keep most of the peasants outside of the castle and away from their valuables, maintaining a distance whilst meting out resources, and trying to maintain enough of an air of authority and promised security to convince the peasants not to leave and work someone else’s land.
One of the ways landowners did this was to convince the peasants that those in the castle were their betters, special by birth or right, and deserving of their respect and service. Often they adopted titles, like Lord, King, Master, etc., to formalize the difference between themselves and their workers. Other landowners maintained control by physically dominating and intimidating the peasants, making it hard or impossible for them to do anything but what the landowners told them to do. Thus, by either psychological or physical manipulation, did the landowners stay in power. Needless to say, it was no fun to be a peasant at the time… but great fun to be a lord.
It was a precarious balance of social and economic systems, constantly under threat from either side: If the landowners were too cruel or stingy, or too weak, their peasants would not work well, or would desert them, and the valuable land went fallow; or they might storm the castle and take over, temporarily gaining the stored valuables inside, but usually ruining their local economy in the process; and if the landowners didn’t know how to manage, all the industrious work of the peasants could be wasted, their work uncompensated. But this was the way of the world, and it persisted for centuries, even after the beginnings of the Enlightenment created a new capitalist society that heralded the things to come.
~
This abbreviated history lesson was required in order to illustrate the parallels between the old feudal society and the modern Western publishing system as it has stood for roughly the last century. The irony is that, despite the greater knowledge of a newer age, and the publishing industry’s image as generally being led by the more learned of men, publishing’s feudal system was inspired by the same fears as those of the 1400s landowners, and has been maintained with essentially the same tools.
The printing revolution kicked off by Gutenberg’s wonderful invention actually took a few centuries to stabilize into the well-run capitalist machine we are all familiar with today. Previously, documents were created by individuals working manually to reproduce texts, whose work rarely extended much past their local area, and whose quality and subject matter varied widely. As the young printing industry developed, it had a few major roadblocks to overcome, including the image of books as “elite luxury” items, the low literacy level of the bulk of the population, the challenges of international markets, and the cost of the books themselves.
The 1700s saw many of these challenges taken on by the more forward-thinking nations in Europe, as well as the Founding Fathers of the new nation to the west, the United States of America: New laws were set down to enact a semblance of control over individual published works, as an incentive for more creators to create; known as copyright laws, they established a period of time wherein a published work would be considered by law to be exclusive to the creator, and thereby guarantee any profit from the work to him. Thanks largely to copyright law, it became possible to make a decent living off of writing and publishing your own work.
Inspired by publishing’s newfound ability to make money, individuals banded together to form the first publishing houses, organizations set up to make a major business out of publishing. Most of the first houses would be dedicated to certain types of content, or to satisfying a particular class of individual with the content they craved. The content itself could be anything, from the most cherished of fine literature and reference books, to the “penny dreadfuls,” cheaply-made pocket-sized books that often contained racy and unrespectable stories… the equivalent of today’s tawdry romance novels and cheap pornography.
By the 1800s, publishing houses were establishing themselves as respectable businesses throughout Europe and the Americas. They banked upon their status as learned men, and leveraged that reputation heavily, to the extent that certain publishers’ works might be sought out by those in the know, as being more carefully edited, more appropriate to the audience, and more finely crafted products. It might have been commendable to have written a book; but to have it published by a major publisher was especially impressive, and such an accomplishment earned writers more respect with their peers. At this time, those houses could be thought of as Big Publishing (which I shall also refer to hereafter as Big Pub), an industry in its own right.
The 1800s into the 1900s saw Big Pub, like many businesses, taking liberties with the somewhat lax controls over businesses and trade in general, to establish exclusive agreements with their partners, essentially agreements to work together to maximize profit, and incidentally to quash the efforts of competitors, wherever possible. This was the beginning of the movement to marginalize all writing and publishing outside of the established Big Publishers and their partners: At the business end, contracts and trade agreements forced outsiders to either play ball with Big Pub, or go home; and at the consumer end, access to content was being increasingly restricted to the output of Big Pub.
There were, of course, small outfits that still produced printed matter… a basic printing press could be operated by a single person out of a garage or basement, and there was nothing Big Pub could do to prevent that. In order to combat the threat of the little guys, therefore, Big Pub took the elite track: Essentially they waved their credentials, highlighted their expensive printing machinery and ensconced themselves in modern and ostentatious offices, and began a subtle campaign to convince the public that their trappings were a reflection of the quality of their product, and in fact absolutely required to make printed matter a quality product. This campaign was even applied to prospective authors, used to cajole a desired author into working with a particular publisher over another, and thereby cement their superiority over others. Anyone outside of their sphere of influence was by extension considered of lesser quality or authority, or at least influence, among authors and consumers, and incidentally marginalized at the retail level to suggest a lesser quality and popularity. This created a self-perpetuating process of outsiders simultaneously supporting and clamoring for entry into Big Pub’s inner circles, and maintaining the status quo of the industry within and without.
To be clear, this wasn’t an organized marketing campaign at work; it was psychological warfare, waged in the offices as well as on the streets; comments made here, decisions made there, which seemed to indicate that literature produced outside of the system was no good; and exacerbated by a sales system centered around the publishers that shut out non-publisher material, another implied slight as to its quality. Big Pub used its influence to suggest at any opportunity that any works, other than their own, were not worth looking at, and through their sales and marketing tactics, made sure other elements learned the same lesson. Vendors and consumers alike bought into the campaign, mainly because it profited them to do so… and after a while, they were spreading the same message, like duly-indoctrinated members of a populist or religious movement.
So: In order to maintain their sovereignty and retain their property and profits, Big Pub had created a façade, a castle, placing them as the lords of the industry, establishing a clear boundary to keep out the undesirables (the consumers, and to a great extent, all but the authors upon whom they directly profited), and using physical (contracts) and psychological (marketing) tactics to maintain their superior status. The fix was in, and everybody was in on the gag.
~
This is the state of the modern Big Publishing industry: At the beginning of the twenty-first century, it is still ruled by fifteenth-century thinking. And because this thinking has allowed them to maintain their status quo and profits for so long, Big Pub has seen little or no reason to change, and has steadfastly defended its castle walls against all comers.
Until recently, there has been little to challenge Big Pub: They had been incredibly successful at establishing themselves as the ones in control of the publishing industry. Big Pub was the perceived nexus of the collaboration between authors, printers, paper manufacturers and retailers, any one of which, had it collapsed on its own, could have ruined the entire industry… but Big Pub had assumed the position of the brain of the entire organism, the controlling entity directing the industry’s every action. Big Pub had calmly and deliberately tweaked every aspect of the publishing process over the decades, and as a result had reduced publishing to rote and formula, from the felling of a tree to the ringing of the cashier. Their methods were not to be challenged, because they were optimized to provide the maximum return on investment, with Big Pub naturally pocketing the lion’s share of the profits.
There have been very few challenges to this formula over the past century. One, early in the twentieth century, was the paperback book. Made possible by cheaper presses and improved printing efficiency, it threatened to undercut the book’s status as a valuable object: Whereas a hardback book was large, impressive and expensive, saying something positive about its owner, almost anyone could afford a paperback. Big Pub dealt with this challenge by producing its own paperback material, considered to be of a more base nature and lesser quality, simply to win the money of the less discerning buyer and crowd out the small-time early paperback producers. Later, they would re-release their hardback books in paperback form, again to win increased sales from the original work without having to put in the writing and editing for a new book. Eventually the paperback became a legitimate product in its own right, and the transition from hardback to paperback in the production lifespan of a book became standard procedure. But the hardback was always held up as the premium product, to which everyone should aspire, and is to this day.
Another challenge to the existing publishing industry was the sudden gain in leisure time witnessed in the mid-twentieth century, similar in effect to the gains of the nineteenth century. This resulted in a surge in the number of hobbyist writers, and of professional writers seeking to expand their own markets. These writers duly descended upon Big Pub, as they had been pre-programmed to do by Big Pub itself, and quickly began to overwhelm the publishers. The industry responded by augmenting and promoting the value of the Agent, a middleman designed to act as a pre-publishing filter and reduce the amount of material arriving at Big Pub’s door. Although the agent worked for the author in theory, he was really an agent of the publishers, knowing their ins and outs, and negotiating his deals more according to the publisher’s demands than the writer’s.
It was at this stage that I was introduced to Big Pub, in trying to get my first novel published. I was certainly prepared to deal with turn-downs, critiques and slush-pile realities, knowing what I did about the industry. However, I was not prepared to be told not to bother to send anything at all. It seemed that the publishers were so full of work, and so sure that they were not passing up on a potentially good thing or author, that they were not even accepting submissions. They had totally walled themselves off, and cut the chains that opened the gates. Even their agents were telling me not to bother, as they had plenty of clients already. Big Pub had become completely isolated from the very people who purchased their material, and those who might provide them more material to sell.
This establishment and successful maintenance of formulaic control has resulted in an industry that is not only complacent, but happy to revel in its complacency, even tying it into its façade of superiority: Because they are so good at what they do, the argument goes, they can afford to be complacent. Big Pub has used this strategy to downplay and ignore most efforts to modernize or improve any aspect of the publishing process, holding up its established formula as superior to any new scheme, and emphasizing its desire to maintain the façade, the castle as they had built it, and defend it against any effort to reduce its sovereignty.
But this strategy has left them vulnerable to major changes from without. Like a medieval castle that hopes to withstand a World War II howitzer or a cruise missile, it is only a matter of time before the walls will be breached.
~
When e-books appeared in the 1990s, mostly as manuals written by programmers and amateur fiction written by fans of science fiction, fantasy or pornography, Big Pub had little reason to take immediate notice. The very limited phenomenon of creating and sharing electronic files on storage disks seemed related more to computers than to books, and computers were only then beginning to make a significant impact on businesses, with an eye to the home market on the horizon, thanks to the efforts of Microsoft and Apple. In addition, literature did not seem to be the dominant media being shared by computer users: A quick perusal of the many disks available from legitimate and less-than-legitimate sources indicated that commercial interest in electronic files was primarily about software, games and dirty pictures.
Fan fiction, of course, had been around for years by then, especially in the form of typed and copied short stories and novellas distributed by hand amongst fans of a particular TV show or comic book series. This material was shared freely, since fans generally were more interested in someone reading their material, and telling them how good it was, than in making money off of it. Besides, Big Pub had already applied the rules of copyright to establish a mind-set in the consumer sphere that any attempt to capitalize on someone else’s work would result in painful lawsuits and public humiliation—a rare action, but the cost of falling victim to one was enough to keep consumers at bay. Not to mention the implication that “fanfic” was by definition badly written, by virtue of the fact that it had not gone through the Big Pub machine. Amateur writers accepted this without much fuss, and didn’t ask a dime for their work, so the publishers left them alone, a sign of their magnanimity and tolerance of the hapless Little Guy.
The ongoing proliferation of computers, with their inherent ease of producing and disseminating documents, did little to faze Big Pub. They foresaw, at most, that the same unimportant fanfic would flit about from individual computer to individual computer, still not making the amateurs any profits, and still not threatening their own profit base. To be fair, they were as taken by surprise by the meteoric rise of e-mail, and then the internet, as most industries were. But by the 2000s, computers had developed by leaps and bounds… and it was the amateurs who were taking the raw potential in the hardware and software and creating masterworks of programs, games and new media types, forcing the big boys in the computer industry to struggle to keep up.
A combination of factors finally caught Big Pub’s attention. One was the occasional sighting of a document, some piece of literature that someone had gone to the trouble of transcribing into an electronic format. Another was the rise of the e-mail attachment, allowing the e-mailer to send a copy of the attachment, each as good as the original, to any number of recipients with the press of a button. And finally, there was the realization that there were people out there who were perfectly happy reading those documents on their computers, or printing them out onto letter-sized pages, instead of buying carefully crafted and formatted books. And still, Big Pub thought nothing of it. One or two books, sent to a few people, were no threat!
In fact, only one group saw a threat early on: Textbook publishers. Their niche was unusual in publishing: They commanded a captive audience for their works, and because that audience was relatively small, and their works were generally only usable for a few years (given regular textbook rewrites), they could charge large sums for their books to finance R&D and revisions. In upper-level schools, students had been talking about digital textbooks for years… and some of them were smart enough to actually create and disseminate texts of their own, if they were so inclined, putting direct pressure on educational publishers to look into the e-book thing. But these learned people could not figure out how to make a financial model out of e-books that would be nearly as lucrative as their print-based system. A few test programs were attempted, but there was always the impression that their findings were a foregone conclusion: that ed publishing couldn’t do e-books. Maybe next year. They were among the first publishers to investigate e-books, and among the first to pass on them… but at least they’d acknowledged that they existed.
Finally, a few intrepid readers began contacting publishers with a new idea: “Why don’t you release your books as ‘e-books’? If you did, I’d buy ‘em.” But these requests were too few and far between, and Big Pub hadn’t even looked at “e-books,” whatever they were. So the requests were easy to ignore… and in the process, Big Pub missed out on another quality of the nascent internet: The ability to spread electronic “gossip” and comments about anti-consumer sentiments and actions amongst literally millions of people, overnight. By taking no action, Big Pub went from nigh-omniscient authority on literature to Luddite organization holding back progress, before they knew what had happened.
~
When e-books started to make a mark in the computer-savvy public’s consciousness in the 2000s, Big Pub was woefully ignorant of this supposedly future version of the very product through which they made their living. But they had recently watched as the music industry had been thoroughly drubbed by the ascension of digital music, and they were well aware that a lot of consumers were suddenly turning away from the bloody MP3 tableau, looking at Big Pub, and saying, “You’re next.”
Unfortunately, they were unprepared to mount a serious effort to adopt e-books into their finely-honed, now-decades-old system… in fact, many insiders still insisted it wasn’t worth the trouble. And in the absence of Big Pub influence, the e-book market had taken advantage of a total lack of industry or standards guidance to develop in a number of directions at once, leaving the 2000s e-book landscape looking disordered and confusing at best. So Big Pub’s initial research revealed a chaos of standards, formats, selling methods, bad habits, predatory practices and lack of regulation. Straightening all of that out, they realized, would require teams of programmers, media blitzes, and the assistance of major corporate players in the computer and internet landscape. And every publishing company knew it did not have those kinds of resources available, or if it did, did not want to spare them for fear of eating through its profit margins. Before Big Pub had even entered the gate, it seemed the race was already lost to them.
So, they fell back on Standard Operating Procedure… in this case, giving the impression that Big Pub was perfect as it was, and there was nothing in the e-book world that would ever impact them. They told themselves the whole thing was a fad, and would soon go away. More importantly, they told their vendors, their stores, and their consumers the same thing. A few publishers made some efforts for consumers, to make it look like they were actively pursuing an e-book agenda… but in reality, they were token efforts, with no budget or personnel applied to them, no decisions made, and no clear directions established.
~
Only recently, with the sudden onslaught represented primarily by Amazon and the Kindle store, has Big Pub finally been forced to take notice of the e-book phenomenon. Unfortunately, many of them are either unwilling or unable to accept that the mechanics of a computer-based industry are radically different from the horse they have been riding for the past century. Many of the elements have changed, some are new, some older elements are no longer needed at all, and the delicate balance between writer, editor, supplier, producer and retailer has been forever altered in the new, computer-dominated landscape.
Some of the Big Pub familiars are working feverishly to adapt to the demands of the new world. Not all of them will succeed, however. For many of them, the challenges of retooling a business system for a new era will tax them beyond their abilities, and they will either have to concede defeat, or fight to the last until they go down in flames.
Other publishers are defiantly sticking to their guns, hoping to stave off progress at least until the present executives are all ready to retire with their accumulated fortunes. These companies will abruptly, but predictably, sell off their assets and shutter themselves when the higher-ups decide the business is no longer viable, leaving a wake of discarded partners and employees wailing after them.
In the end, some of both groups will undoubtedly face the same end: Collapse, a buckling of the old infrastructure under the weight of the new. Some of the original Big Pub players will survive, and will join new players in the publishing arena in picking up the pieces of the twentieth century, and trying to build a publishing scenario for the twenty-first.
2: Enter Windows—The modern computer era begets the Format Wars
Today, even some veteran office workers have to think hard to remember a business environment before there was Windows. It’s a bit easier to remember a home environment before computers… provided you’re older than thirty. The computer has taken its place as one of the most transformative appliances of the modern age, up there with the television, the telephone, the radio, and the stove.
When computers were first introduced to the business environment, they were mainframe devices used for specialized data storage or number-crunching tasks, and run by electrical engineers who spoke strange languages like FORTRAN. Their exotic nature created an air of mystery and fascination, and this extended to the men and women who operated them: They were modern sorcerers who spoke in tongues to the magic machines, and produced incredibly fast and precise numbers at will.
The early desktop computers, mostly running DOS or some similar text-based operating system, were considered too complicated for any but the most highly-trained workers to master, so the first programs were designed to be used by mildly-trained workers, of whom there were many more. I was one of those early workers, using DOS-based accounting and word-processing programs to keep track of sales orders. It was a bit clumsy, but it did its job much more efficiently than manually-entered figures in paper ledgers.
But a few people saw a market for easier-to-use computers that could be sold to just about everyone. One of them was Bill Gates, whose company, Microsoft, had already made its name supplying IBM with the DOS operating system for its PCs. Microsoft went on to develop a user-friendly graphical interface over a DOS substratum; that interface became Windows… and the rest is history.
Though other computers with similar operating systems were being developed at the same time, Microsoft followed its business roots and marketed its software to businesses first… a shrewd move, since companies could buy in bulk and creatively massage the expense into operating costs and tax writeoffs. Microsoft also continued to write other software packages, the best of them designed to run on its Windows platform, the idea being to promote more purchases of Windows to run the desired programs. And although the lax business practices of the early twentieth century had mostly been tightened up since then, Microsoft was able to take advantage of what trade loopholes remained to begin shoehorning its way into stores and marginalizing its competitors, in much the same way that Big Pub had done before it.
As many Americans were already used to the idea of bringing their work home with them, it didn’t take long before the new machines for the office were being purchased for home offices as well. Windows quickly became ubiquitous in the U.S., and not long thereafter in Europe and Asia, overshadowing even its better-made competitors and becoming virtually synonymous with computers in the modern era.
~
Business was largely paper-based before computers. Documents were first written by hand, then typed up in typing pools, stored in notebooks, reproduced on copying machines, notated by one or more people, then sent back to the typing pool to be retyped, stored in notebooks, reproduced on copying machines… ad infinitum. The amount of paper used by modern business was monumental, and the waste generated by used and discarded office paper was gargantuan.
Computers were designed, though maybe not fully intended, to change that. They introduced the electronic file to the standard business lexicon, and began the shift from moving paper from place to place, to moving disks from place to place, or accessing files from central servers that connected multiple computers. Enough text to fill a 3” binder could now fit onto a 3.5” floppy disk, and a literal library of documents could be stored in a shoebox.
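That storage claim holds up to a quick back-of-envelope check. The figures below are assumptions for illustration: roughly 2,000 characters to a typed page, and the nominal 1.44 MB capacity of a high-density 3.5” floppy.

```python
# Back-of-envelope check: how many typed pages of plain text fit on
# one high-density 3.5" floppy disk? Figures are assumed for illustration.

PAGE_CHARS = 2_000           # assumed characters on one typed page
FLOPPY_BYTES = 1_440_000     # nominal 1.44 MB high-density floppy
                             # (one byte per plain ASCII character)

pages_per_floppy = FLOPPY_BYTES // PAGE_CHARS
print(pages_per_floppy)      # 720 pages: several thick binders' worth
```

At around 720 pages per disk, a shoebox of a few dozen floppies really could hold a small library.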
This new way of handling documents inspired plenty of forward-thinking office workers, who began conceiving of and developing new and better ways of using electronic files beyond the capabilities of paper. New software was developed to ease the process of collaboration between multiple people—that is, multiple computers in different locations—and shared files. Microsoft detected this trend quickly and, wanting to cement its market share of the business environment, began developing new tools for Windows to support collaboration and new work patterns.
Microsoft’s desire to dominate its almost single-handedly-created market seemed to be matched only by the desire of many businesspeople to push computers back, to slow their advance into the business market. Some saw the trend to computerization as disruptive, overly-complicated and bothersome, while others resisted the Windows platform in particular over other operating systems that were more robust and less crash-prone. Microsoft fought against this trend in a notoriously heavy-handed way, essentially throwing its new commercial influence and monetary might around to force retailers to feature their products, drive competitors out of business, or buy up competitors with the idea of assimilating or dismantling them.
In following these business practices, Microsoft ushered in the adversarial nature of the computer business, pitting competitor against competitor, and users of one OS against users of another, in a constant and escalating battle of one-upmanship and domination. This environment frustrated those who saw the potential of Windows and/or other OSs, but who were getting tired of the reality of plodding advancement. And as many of them were programmers, college-taught or self-taught with the idea of helping to usher in the new computer-dominated era, they decided that they didn’t have to wait.
~
When computers were still in their mainframe stage, they had created an upsurge in popularity that inspired students around the world to take programming courses, and prepare for the computer age. I was starting college at that time, and I remember the pressure to try your hand at languages like Fortran and Cobol, while they were hot. In contrast, the reality presented by Microsoft was far from the dream those programmers carried. Instead of controlling mainframe systems, computers would run independently with commercially-prepared applications, and a minimum of programmer input. Many of those programmers felt the new computer systems would marginalize them, and either on their own or in groups, were convinced that they could beat Microsoft (and its close rival, Apple) at their own game. And thanks to the open architecture nature of computers, they could build their programs to run on existing computers, which were already commonplace… so anyone could use their programs as they did.
Therefore, when these programmers were faced with a piece of software that did not do what they wanted it to do, their solution was to write their own versions of those programs. Some borrowed heavily from existing programs, while others worked from scratch, customizing their applications to their desires or the desires of their groups. But although they were willing to go to such lengths to customize their programs, they were not as interested in making sure their programs could read already-standardized document formats. To work with their programs, document files had to be custom-written as well, their code optimized for the application that would run them.
Enter the Format Wars… in my opinion, probably the absolute worst single thing ever to happen to computers. From here on in, it became standard operating procedure to create a new format for every new application, and even to tweak the format when the application was upgraded, often restricting the format to the version of the application for which it was created. The added complexity made even the simplest alteration to a program a nightmare of fixes for multiple versions of programs, and each of their document formats. And very often, this complexity rendered a popular and useful program buggy and crash-prone, inevitably inspiring some young programmer to write a new alternative to that program, with its attendant new document format, and on, and on…
E-books were not spared by the Format Wars. For every major company that thought it knew the best way to format e-books, a basement programmer thought he knew a better way. And thanks to the newly-developed World Wide Web, it was as easy for the programmer to disseminate his program as it was for a major business. At one point, there were literally too many e-book formats to count; today, there are probably more than a score of e-book readers and formats being used by someone, somewhere; and even narrowing your focus to the most popular will turn up at least half a dozen formats vying for superiority.
Until recently, there had never been a consensus on which format (or even two or three formats) was superior to the others. The formats were all too similar, while each had a unique feature or two that was highly prized by its users. As a result, the only formats that died off were those neglected by their creators or user groups; none of them “merged” together into a new, better amalgamation.
But as publishers sought to enter the e-book arena, the plethora of formats confused and frustrated them. It made little sense to commit to the time and cost of creating 15-20 different versions of one e-book, to satisfy so many document formats. Yet there were few clearly dominant formats to choose from, no way to confidently select the one or two that held the overwhelming market share and leave the rest. And what if future software improvements meant going back through the library and making changes to the book formatting? The logistics of an entry into the e-book arena were clearly too much for Big Pub, and a stretch even for smaller publishers with much more open minds.
This is why it was the small publishers that made a splash in the arena first. Companies like Mobipocket, which backed its own format and application to go with the books it sold, and Fictionwise, which would sell books in multiple formats, developed web presences and started to make sales, while the Big Pub houses reassured themselves that most of the e-book stores’ content came from amateur and independent authors and small publishers, all considered “substandard” by the majority of the public (thanks to Big Pub’s efforts), and surely not destined to last long.
Interestingly enough, one of the earliest and most ubiquitous e-book formats isn’t considered by many to be an e-book format at all: Adobe’s Portable Document Format (PDF, created and read with the company’s Acrobat and Reader applications) has been the dominant electronic document format since the 90s. However, PDF was primarily created to “lock in” a document’s layout, so that it could be moved from one computer to another, always look exactly the same on-screen, and always give the same result when printed. The format was quickly adopted by business users, egged on by the aggressive marketing strategy of Adobe, and has become a default document-sharing format for businesses worldwide. But although the format has undergone changes over the years, making it much more suitable for a variety of digital reading devices, it is considered too bloated and print-centric to be a “proper” e-book format, and is being bypassed in favor of simpler, more compact formats.
Another potential format for e-books is HTML, the coding used to render web pages on a browser. A number of the various e-book formats are actually built on versions of HTML, or based on HTML coding. This would suggest that HTML, read on browsers optimized for e-books (as opposed to web pages), might be an ideal format for future e-books. However, this logic has somehow escaped those who have tried to develop the “perfect” e-book format, or it has been assumed by programmers, apparently too used to taking the hard way out of every easy situation, that HTML was not robust enough to present words and occasional graphics to a user.
~
Recent developments have greatly simplified, though not extinguished, the Format Wars and its collateral damage. Two of the developments are unified efforts to create a dominant format, but from differing perspectives, while the other development is being driven by a different sort of economy, and is even further behind than the commercial e-book market.
The first development was the creation of the International Digital Publishing Forum (IDPF.org). The IDPF set out to create a common e-book format that would satisfy the majority of e-book users… in essence, no different from previous efforts, except that they hoped to create an open, non-proprietary format that any other organization or individual could use. They planned to oversee the format, and vet and either authorize or veto any changes to it, to make sure the format remained easy to use, fully documented, and widely implementable. The format eventually introduced and recommended to the world grew out of the earlier Open eBook (OEB) specification, and is now more popularly known by its file extension, EPUB. The IDPF has openly and widely encouraged the use of the format, and as its non-proprietary nature means no licensing is required to create documents, or to use (or create) readers, it garnered a lot of interest right out of the box. Today it is already well on its way to becoming the dominant, possibly even the default, international e-book format and platform.
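The openness of the format is easy to demonstrate: under the hood, an EPUB file is just a ZIP archive containing XHTML content and a little XML metadata, so a minimal one can be assembled with nothing but a standard programming-language library. The sketch below does this in Python; the book title, filenames and chapter text are invented for illustration, and the sketch omits pieces (such as the NCX table of contents) that a strict validator would also require.

```python
import zipfile

# Illustrative only: the content of this "book" is made up, and a
# fully valid EPUB needs a few more pieces (e.g. an NCX table of
# contents). The structure below is the EPUB container's skeleton.

CHAPTER = """<?xml version="1.0" encoding="utf-8"?>
<html xmlns="http://www.w3.org/1999/xhtml">
<head><title>Chapter 1</title></head>
<body><h1>Chapter 1</h1><p>It was a dark and stormy night...</p></body>
</html>"""

CONTAINER = """<?xml version="1.0"?>
<container version="1.0"
    xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="OEBPS/content.opf"
        media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>"""

OPF = """<?xml version="1.0" encoding="utf-8"?>
<package xmlns="http://www.idpf.org/2007/opf"
    unique-identifier="id" version="2.0">
  <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
    <dc:title>Example Book</dc:title>
    <dc:identifier id="id">example-0001</dc:identifier>
    <dc:language>en</dc:language>
  </metadata>
  <manifest>
    <item id="ch1" href="chapter1.xhtml"
        media-type="application/xhtml+xml"/>
  </manifest>
  <spine><itemref idref="ch1"/></spine>
</package>"""

def build_epub(path="example.epub"):
    with zipfile.ZipFile(path, "w") as z:
        # The mimetype entry must come first and be stored uncompressed,
        # so reading software can identify the file without unzipping it.
        z.writestr("mimetype", "application/epub+zip",
                   compress_type=zipfile.ZIP_STORED)
        z.writestr("META-INF/container.xml", CONTAINER)  # points to the OPF
        z.writestr("OEBPS/content.opf", OPF)             # metadata + manifest
        z.writestr("OEBPS/chapter1.xhtml", CHAPTER)      # the actual book text
    return path

build_epub()
```

The point is not that publishers should roll their own e-books this way, but that nothing in the format is secret or licensed: the content files are ordinary XHTML, exactly the sort of open building blocks the IDPF intended.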
The second development was the entry into the e-book market of Amazon.com. When Amazon, already a powerful retailer, decided to move into the e-book field, it planned to take advantage of its established catalog of content and web-based selling infrastructure by manufacturing a reading device that would connect wirelessly to its catalog and allow fast and easy purchasing and downloading. Its device, called the Kindle, was widely anticipated, and strongly marketed by Amazon when it was introduced. It quickly became the dominant dedicated e-book reading device on the market, in no time becoming a trade name almost as well-known as “Kleenex” or “Aspirin.” Amazon purchased Mobipocket, already considered one of the dominant e-book format platforms, to provide a format for the Kindle device, and the Kindle’s popularity has extended the Mobipocket format’s popularity accordingly. As the Kindle store grows, expect to see more content created in the Mobipocket format so it can be sold to Kindle users.
The third development hasn’t exactly come out of left field, but it seems it was out there an awfully long time before making itself heard. The world of educational texts had repeatedly dipped its toes into the e-book waters since the 1990s. But if Big Pub had an established system in place that it was loath to break, educational Big Pub was even more deeply entrenched. Educational publishing, with its narrower markets and tighter profit margins, its highly competitive nature and its need for greater accuracy in its product, could not conceive of changing its publishing machine to accommodate e-books, and held fast against them as long as it could.
However, the recent downturn of the economy has put students and institutions in a financial bind, and the textbook industry is being forced to deal with a smaller, more demanding market. Publishers are now developing e-book versions of their texts, which has raised a new question: Are any of the existing formats well-suited to textbooks, with their extensive cross-referencing, pictures, diagrams and complex layouts? While some publishers look forward, examining devices like the Kindle, others are returning to the venerable PDF format, based on the preponderance of legacy material already saved as PDFs, and the fact that PDFs can be read on the computers and laptops that are becoming standard issue on college campuses.
It is too early to say which format or formats will become truly dominant in the industry… or whether we may see a new format arrive and eclipse all the others in due time. The present uncertainty may be much less than it was, but it is still acting to slow the adoption of e-books into other markets and industries.
3: Printing—The shotgun marriage of paper and computers
Paper, of course, has had a special place in the business world since it was invented… which is to say, paper and business have been together for a long, long time. After all, business transactions need to be recorded somewhere: Before there was a way to record transactions, there was essentially no business beyond very basic trade. Recording transactions made business memory possible, and allowed more elaborate and far-reaching business to be conducted. Paper made those transactions easier to record than making impressions on clay tablets, and cheaper than making marks on skins, so it was instrumental in making business better.
In the latter part of the twentieth century, paper was more important to business, and more encompassing than ever, thanks to the introduction of modern inventions designed to allow printing and reproduction right at the office. Adding carbon paper to the already-perfected typewriter was only the first in a series of steps, which became an order of magnitude more significant with each iteration: The mimeograph machine allowed paper copies to be mechanically reproduced in a single minute, as opposed to an hour; then the photocopier further reduced that same job to a few seconds, and started taking on more elaborate tasks such as collating and stapling, to speed up the process even more. All of this helped twentieth century business grow ever faster, and replace farming and other manual labors as the dominant activity of the planet.
The introduction of the computer threatened, at first, to remove paper from the business landscape; there was, after all, little that absolutely had to be put on paper that wasn’t just as good in an electronic file. But general business practices, much like those of Big Pub, were by nature conservative, and those conservative elements had been using paper for… well, forever, as far as they were concerned. The very idea of removing paper from business processes seemed not only absurd, but downright impossible. It would require no less than a complete top-down reworking of almost every business system and practice, and like Big Pub, few businesses wanted to face the ramifications of that.
One of my earliest “real” jobs (as opposed to the no-future jobs I seemed to excel in for years) involved taking electronic documents created by a creative team down the hall, transmitted over the office network, and printing them on computer-run high-speed printers. The idea was to create the highest-quality documents possible, and to process them as fast as possible. But they were still being printed in multiple copies onto paper, packaged in notebooks or bound documents, and physically shipped from place to place, because the clients demanded physical products, and our associates did not understand the value of saving money by producing electronic documents only.
So, despite computers’ paperless promises, the conservative heads of business continued to demand good old, familiar paper. And as documents were now being produced on computer almost exclusively, that meant it was of the utmost importance to find ways of turning efficient electronic files into… inefficient paper documents.
~
The laser printer was invented as a way to allow individual computers to output documents onto paper quickly. Essentially a new type of photocopier, more sophisticated in some ways and simpler in others, laser printers became standard business accessories overnight, and the flow of paper continued unimpeded. But they were not considered ideal: The first laser printers ran slowly and produced only one copy of a document at a time; and there was the matter of inconsistency between the outputs of different computers and printers—if the hardware, software or settings differed from one computer-and-printer pairing to the next, the printed layout could come out differently. And computers were still too new and complex for most users to figure out these settings, leading to confusion and frustration in offices worldwide.
And there was another problem: Business was moving too quickly, further sped up by the introduction of computers and the Internet; paper simply couldn’t keep up, even with overnight delivery services and couriers working feverishly to accommodate it. There was a need to speed up the movement of paper by using the brand-new networking capabilities of the Internet and the World Wide Web: create a document here, then send it to be printed there. But there was also a fervent aesthetic desire to make sure copies looked the way you wanted them to look, and looked the same from printer to printer; fax machines didn’t provide good-looking paper copies. And everyone still wanted paper to come out at the end.
Engineers at Adobe initially dealt with the problem by devising a page description language that they called PostScript. PostScript helped to standardize laser printer output, no matter the brand or model, so that electronic documents could come out identically from multiple printers. Unfortunately, that didn’t solve the problem of computers having different compositional settings, which could still render printed documents differently. Adobe went back to the drawing board, and later developed a file format designed to solve the problem of computer inconsistencies: the Portable Document Format, also known by its file extension, PDF.
PDF files eventually displaced PostScript files on most computers, for PDF essentially did the same job, but better—a PDF file would look the same on any computer screen, and still look the same when output onto paper from any printer. Adobe marketed the PDF format, and the requisite Acrobat PDF reading application, aggressively, giving away the reader and PDF-generating print drivers with every Adobe product and most computers, and eventually publishing PDF as an open format, winning endorsement from the International Organization for Standardization (ISO) and finally beating out competing document formats from other companies. Adobe continued to augment and refine the PDF format, adding features that made it more versatile while retaining its ability to look the same when output by any printer. The hard-won aesthetics of paper documents assured, PDF-dominated printed paper continued to rule the modern business world, and persists to this day.
~
Big Pub was already familiar with the tricks of getting electronic files to paper. Before computers had reached Big Business’s desktops, larger and more specialized computer/printer hybrids were already common in printing environments. These hybrids used a computer to create a digital file that would then be output onto film to create a sheet of type, or “galley.” This film galley would then be taken to a printer and used to mass-produce book pages. The process made it faster and easier to check and correct content, and the film used for printing made for a cleaner end product. Faster meant cheaper, and a better-looking product meant more sales… reason enough for Big Pub to adopt the new electronic systems.
However, Big Pub was still conservatively run, and simply lacked the foresight to see the value of retaining the electronic files used to create its galleys. Once a galley was run, okayed, and sent to the printer, the electronic file that generated it was erased. To be fair, older electronic equipment was not as efficient at file storage as modern machines, having far more limited hard drive space, sometimes no on-board storage at all, and relying on large floppy disks that each held only a small amount of data. It seemed more efficient to re-use the disks for new projects than to store the data files away for future use. On the other hand, the time and expense of typesetters and editors certainly far exceeded the cost of a few plastic disks. Even though every subsequent printing of a document would require a fresh typesetting, editing and approval run, the practicality and value of saving electronic files never registered with Big Pub.
As a result, most manuscripts printed up to today exist only in paper form in the hands of Big Pub. The potential task of converting all of that literature to electronic files is a daunting one, requiring either the dedicated work of typesetters, or of people manually scanning each page, then checking and editing the results… only to end up with exactly what sat on those floppy disks years ago, files that could simply have been transferred to modern formats for use today. The cost of such an undertaking is equally daunting, and as yet, Big Pub has refused to spend the money or commit the manpower. Only recently have some publishers begun to reapply the electronic files created for the print run to a new production stream, to create e-books for sale. But not every new book gets this treatment, leaving a great deal of literature whose chances of becoming e-books are seriously hindered, first by lack of foresight, and now by lack of resources.
~
The recent establishment of on-demand printing is beginning to change the printing landscape again. On-demand printing essentially takes the now-familiar electronic file, prints it onto pages, and binds the pages into something resembling a professionally-printed book. Though the technology is not new, only recently has it reached a level of cost-efficiency that makes it a feasible proposition.
On-demand printing allows a producer to create a single copy of a manuscript for a consumer, using what is essentially a customized laser printer with a built-in binding station, and the latest machines can produce a single book in less than an hour. The cost per copy cannot compete with that of a far more efficient mass printing run, and on-demand books are often sold for two or three times what a mass-produced version would retail for.
But the point of on-demand is that no mass printing is being done for these books, either because no major publisher has expressed an interest in the book, or because the book has already been run and the publisher has no interest in a second run. On-demand allows single copies, or small numbers of copies, to be run for individual buyers, creating a micro-market for printed books that had so far gone unserved by the publishing industry.
On-demand printing is now being touted for those who are still more comfortable with paper than with electronic files. In serving the previously-ignored individual consumers, it further broadens the reach of the paper economy, and reconfirms its worth to users. Major segments of the printing industry see this as the future of publishing, creating books on-the-spot for customers, and thereby removing the process of mass printing, and the risk of printed and unsold books that will eventually be scrapped at a loss. But this process is more costly per copy, as stated above, and the extra cost of on-demand printing ends up being shifted to the consumer in the form of higher prices for each book.
Turning a mass-production industry into a boutique industry is not likely to result in products that will be cheaper to buy, and the availability of an on-demand printer will dictate availability of a book. Nonetheless, on-demand is seen by many publishers as their future: Keeping literature tied to paper, no matter what format it was in originally.
4: The Web—The wild card no one knew was in the deck
Though the Internet had existed as a researcher’s tool for quite some time, online access did not begin reaching the general public until around 1980. At first of interest only to self-proclaimed computer geeks, the early services of companies like CompuServe introduced users to concepts like electronic messaging, discussion forums and online bulletin boards. These services mostly interested professionals and hobbyists at first, but the amount of content slowly expanded and proliferated, and more of the public began to “get online.”
One of the larger entities on the Internet, prior to the web, was the Usenet newsgroup system. Newsgroups were electronic bulletin boards, each devoted to a particular subject, that allowed users to post messages, answer questions, or post documents for others to download at will. Some of the first e-books were exchanged this way, providing the first-ever potentially global outreach for many an amateur author. Some of the first adverse issues that would face e-books, namely, the illegal dissemination of copyrighted material and the circulation of poor-quality content, were likewise first seen here. For a time, the web and the newsgroups existed in equal strength beside each other. But as the web grew, it pushed the newsgroups out of popularity, until today they have become what they originally were: congregating areas for geeks, largely ignored by or unknown to everyone else.
The World Wide Web’s initial creator, Tim Berners-Lee, conceived of a modest but expandable service on the Internet, available to everyone who could get a modem connection, but he had no idea it would develop as fast as it did. Maintaining the web as an open, unowned platform, which meant no one organization could control it, left it wide open for companies to market the values of the web, and the advantages to potential customers of reaching it through their servers. Much of this “value” came from content available on other web sites, and much of it was legitimate content, legally offered; however, there was also a great deal of less-than-legitimate content, offered without permission or right by independent website creators. There was also a great deal of risqué material, text and photographs that ran the gamut from “good taste” to “call the FBI,” available to anyone who knew the link to reach it. The pornographic industry was an early adopter of the new medium—early adoption of new media being one of its longest-running and most successful marketing tactics—and a great deal of the web’s initial popularity came from those who were more than willing to seek that material out.
The Internet spawned the World Wide Web in 1991, but the real success of the web had already been assured by the establishment of public access to electronic mail. E-mail, the first “E” in the now-massive web lexicon, was soon referred to as the “killer app,” the one thing that encouraged more people to go online than any other single service. E-mail users quickly gravitated to the web when it was introduced, and the ease of using HTML to write web “pages” encouraged professionals and amateurs alike to delve into creating their own websites, available for people around the world to see.
E-mail helped to kick-start the phasing out of paper in everyday use, and the subsequent lessening use of the postal system, delivery services and couriers. This went for the mail itself, of course… though not for the attachments to those e-mails, the documents that businesses and individuals still wanted to print out and hold in their hands. Still, many e-mails became the documents themselves, standing alone without attachments, and despite the insistence of a few die-hards, most individuals and businesspeople stored e-mails electronically only.
E-mail helped consumers slowly wean themselves onto electronic documents, even as the tools to turn any electronic document back into paper persisted. E-mail also helped consumers get used to the beginnings of the global economy, as they could suddenly communicate quickly and cheaply with people all around the world, and the perceived boundaries between peoples and cultures began to fall. Web-based sales and support helped this along, and as the power of global communications grew, so did the realization that paper was not the most efficient method of passing information around.
~
National and regional governments were slow to join the web revolution, mostly due to concerns about a loss of control and sovereignty; even today, a number of governments still resist the open nature of the web. Not so with Big Business: Where governments saw threats to control, businesses saw promises of profit from new markets, and many of them took to the web like ducks to water. Unfortunately, the open nature of the web left all of these entities free to approach things in their own way. The web was too new to have established conventional ways of doing things, and every business thought its way was the best, or at least the most convenient for itself. Commercial entities created multiple methods of transacting business, many of them incompatible with everyone else’s.
Not only was this behavior in some ways similar to the electronic document format wars, but in fact it dovetailed with the format wars to create added complexity to communication and trade, exacerbating an already-serious problem. And just as had happened in the format wars, the creators of the various commercial systems refused to accept the idea that they would be better served by consolidating their methods with others’, or adopting another, more popular system espoused by someone else. There was a “Wild West” mentality about the web, every man for himself, and no amount of reasoning seemed capable of breaking the stalemate.
All of this served to slow the progress of almost every commercial aspect of the web, which furthered the goals of conservatives who insisted on maintaining a paper-filled, old-fashioned way of doing business. With governments dragging their heels over entering the webscape, and therefore in no hurry to force other entities to modernize, businesses found it easy to ignore public encouragement to join the paperless revolution and to maintain their old-school business practices, if for no other reason than that the authorities were not forcing them into such an initially expensive venture.
~
At this stage, the first of the e-book adopters were trying to make themselves heard, but their pleas were largely falling on deaf ears. As the Big Pubs created web identities for themselves, incidentally setting themselves up with systems perfectly capable of delivering digital content like e-books… they also declined to make that content available, very often not even responding to public requests and inquiries on the matter, and leaving themselves open to the public scorn and ridicule that come from ignoring your customers. E-book-hungry consumers created online groups designed to rally support and enact change, but this was a largely futile gesture as far as Big Pub was concerned; publishers were used to going their own way and giving the public what they felt the public really wanted, and they refused to believe they needed to give the public at large e-books.
But the e-book groups’ pleas were heard by others. Small publishers and web mavericks recognized early demand for e-books, and they set about satisfying that demand. Independent websites began to sprout, advertising themselves as the alternatives to unresponsive Big Pub and the best place to go for e-books. Some of these sites were able to attract the cooperation and support of established e-book format creators, and became de-facto sites to visit for certain types of content. Others found themselves in need of formats, and in response created their own, adding to the format wars and commercial complication.
The proliferation of these sites was fueled by the ease of creating an online sales and delivery presence for next to nothing, and by the web’s level playing field for amateurs and veteran companies alike. Though some of these e-book sites were created by established publishers with existing content, others were created by fly-by-night operations that often obtained content from other sites and repackaged it for their own sales. Still other sites encouraged the public to send in material to be published, but much like the fanfic organizations of old, ended up collecting material of widely varied quality. They put it all up anyway, thereby branding themselves early on as purveyors of low-quality or illegitimate content, an image played up by Big Pub to keep potential customers coming to the bookstores. E-book publishers, right out of the gate, were off to a bad start.
This bad start was later exacerbated by the eventual failure of some of those startups. As many of them had pioneered their own e-book formats (and customized reading software for them), the e-book retailers’ collapse often meant leaving the customers with e-books that could not be read on other e-book reading software. As computers and personal devices were replaced, either due to age, failure, or a desire for new programs or more power, consumers realized they could not obtain new versions of their reading software from dead-and-gone vendors. This meant the e-books they had obtained, some of them purchased online, were no longer available to them. Their money had gone down the drain.
This left a bad taste in consumers’ mouths for small-time and start-up publishers, and for e-book formats that might be lost in the future. And since, unfortunately, it was impossible to tell which online stores would stay up, and which formats would remain viable, the move to e-books was further slowed by justifiable consumer distrust and paranoia.
~
As all this was happening, a new and unexpected movement was forming and growing throughout the web: For lack of a formal name, it is generally referred to (depending on who’s speaking about them) as the Free For All, Anarchist, or “Pirate” movement.
This movement was directly created by the open-source promise of computers and the web, and partially by the web’s Wild West atmosphere of freedom and apparent lawlessness. In this atmosphere, early content providers had been making material available at little or no cost to the consumer. In many cases, this tactic was intended to lock in customers, who would then be charged for access later on (a classic marketing ploy). In other cases, it was because the content was being offered without permission by someone who had purchased or downloaded it from some other source. In any event, there was a lot of both, and web users quickly got used to the idea of getting online content for free. And the online services made no effort to warn users that this was a trial-period deal that would eventually end.
When the websites switched to the expected pay-for-content models, many web-consumers screamed bloody murder over the idea of charging for content today that was free yesterday. Many of these consumers decided to get back at this seeming offense by re-posting purchased content online, essentially allowing other people to have the content for free. And thanks to the design of the web, one posting could be accessed by literally anyone on the web, meaning that content intended to be paid for could conceivably end up in the hands of millions of people who did not have to pay for it.
Content creators latched firmly onto this “conceivable” scenario and, in purely paranoid fashion, considered it a direct loss of income. They used that as a reason to secure their online content, or to not provide it at all. Neither decision, of course, sat well with consumers, who increasingly demanded the content, and attacked any commercial entity that denied it to them. The ongoing battle over online content availability for free-versus-paid continues, colors every online transaction to some extent, and has been the direct cause of many companies avoiding the decision to make their content available online until the issue is decided—naturally, in their favor.
5: AOL—We are the future
Although the Internet was open to anyone who had a modem, online services like America Online, CompuServe and Prodigy—which evolved into Internet Service Providers, or ISPs—are largely credited with bringing the bulk of the public to the Internet, and they did it through a subscription model.
The secret was pre-packaging: Since most people didn’t know what was on the web, or where, the early online services created encapsulated “mini-webs” devoted to popular subjects, easy to find and fun to participate in. The catch was that only a member could access that content; so, for a monthly fee, subscribers could visit the forums that interested them, download content, view videos and listen to music, and use e-mail to an unlimited extent. The quick-and-easy web was very popular among newbies, and the online services signed up subscribers by the millions.
Once you paid to be a member, much of the content you could access on the services cost no more… and the services duly described it as “free,” giving the impression that you signed up just for the service and whatever was on it. The “free content” model was also very popular, and as described in Chapter 4, led to the perception among users that all web content should therefore be free. Soon it looked as if the online services would be the greatest development of the age, the place everyone would be spending their time in the future.
I was one of the early subscribers to the Washington Post ISP. Yes, before the Post became just another online newspaper, it was a full-blown ISP, among the second generation of ISPs, offering an e-mail account, subscriber-only access to content—in this case, some of the Washington Post newspaper’s articles put online—and access to the budding World Wide Web. In a fashion, it could be considered among the earliest publishing organizations to bring its digitized content to the web, for sale under a subscription model. I thought a news-based ISP was much better than my previous CompuServe account, with its number-based e-mail addresses (I am a fan of the TV series The Prisoner, whose main character proclaimed at the beginning of every episode: “I am not a number… I am a Free Man!”) and its, to me, uninteresting subscriber content.
But the online services found themselves fighting to satisfy the seemingly insatiable hunger of their customers. And as the web developed beyond the online services, consumers clamored for access to that, too. It took some time, but eventually, the online services gave it to them. This led to the discovery of even more free content on the web proper, and even more of a perception that this was the way it ought to be.
Within a few years of opening up their services to the web, the online services began to be abandoned by their users, who were slowly but surely finding other ways to track down the content they wanted on the web. As they did, they increasingly asked what they were paying the online services for, especially as other providers offered just e-mail and web access, adequate for most people’s needs, for significantly less money. Within a few short years of being the hottest properties on the stock floor, the online services began to die off from lack of interest. One by one, they scaled down or closed their doors. For them, the future was over after about a decade.
I held out with my Post ISP for a while, until the fateful day I received an e-mail notifying me that the Post would be converted to another free-for-use website, and its ISP services would be shut down. At the time, this was common enough as well: ISPs suddenly going under, and their users being forced to scramble to set up new accounts (and e-mail addresses) with another ISP. In the Post’s case, it had not found enough subscribers to make its online publishing model viable, so it switched to an advertiser-based financial model and made its content free to web visitors… not the first, and certainly not the last, to do so.
~
But web visitors were discovering that more of the premium content on the web was now behind a subscription service too: The content providers had followed the examples of the ISPs, and had started charging for content. This sudden reversal fueled the aforementioned free-versus-paid movements, and once again new arguments arose over free and subscription content. The one thing lacking, now that the online services were closing down, was a place to discuss these matters.
Which brings us to another result of the online services’ influence: the introduction of users to non-newsgroup forums. These forums gave groups with a particular interest a place to talk about that interest, to share information, or to vent. They became one of the most popular features of the online services, and many customers chose their service based on the groups they could connect with there. With the subsequent collapse of the online services, the users’ forums were lost as well. Fortunately, many of the users were savvy and dedicated enough to make sure their favorite groups wouldn’t be lost, and they took advantage of the web to preserve them.
So many of the users’ forums were transformed into independent websites, often populated by the same people from the online services, plus many more who had not been subscribers to that service, but who were also interested in the group. This gave people a place to continue discussing their favorite subjects, and venting their frustrations about them. In the past, the world was filled with people who felt completely alone in their ideas and viewpoints. Today, many of those solitary people take comfort in knowing that there are others who share their views, and they feel more empowered as a result.
I was one of many who searched through the web for groups that mirrored my interests and tastes, signed up to listen, and often ended up being a regular participant. It was a great way for people, especially those who seemed to have no local peers on a particular subject, to find new people to share their interests with, and to feel part of a larger community. The legitimacy of those online communities would soon come into question, as well as their psychological impact: Did they make people more outgoing, or did they isolate them from the “real world” outside their front door? But for individuals, it established a sense of “belonging” that many of them lacked in the real world, and had the potential, at least, to boost their self-confidence and sense of identity.
The web quickly helped to expand another phenomenon: web activism. Like the newsgroup users before them, web users discovered the psychological power born of anonymity, and the ease of speaking up or taking a side from the comfort of home. As a result, they became more vocal about their wants and desires, their gripes, their demands, and their criticisms of others, corporate or individual. This web activism was especially focused in website forums, where visitors or registered members could start discussions, debate topics, argue amongst themselves, and verbally attack entities with which they had no physical contact or real influence.
This empowerment helped to create a more forward and assured consumer, one who wasn’t afraid of making demands of its commercial interests (especially when there were a few dozen to a few hundred people backing them up). And the commercial interests themselves had discovered these groups dedicated to, and commenting on, their products; in order for them to know what the public was saying, all they had to do was spend time in those discussion groups, and take their cues from the conversations therein. Many companies received appropriate reinforcement from these groups, in the form of word spread to other potential customers and increased sales, or negative press spread from website to website, often being picked up by the legitimate news services if it went far enough—the groups began to influence commercial policy. It was a web version of the people’s revolution in the purest sense, with the smallest of little guys helping to steer the big fish either into the history books, or onto the rocks.
In the case of e-books, a number of user groups appeared, many guided by interest in a particular genre of literature, or devoted to e-books for a particular e-book format. A few of these groups were even started by publishers of these genres or formats, in order to help gather customers, gain insight as to their customers’ desires, and reach a consensus on how to offer their works on the market. Other commercial entities sought out these groups for the same purpose.
Other groups were formed with a more generalized interest in e-books. These sites did not concern themselves with any specific genre or format; rather, they were interested in furthering and promoting e-books in the marketplace, and in helping anyone who wanted to know where to get e-books, how to read them, and how to deal with the problems that arose: handling multiple formats, transferring files to different reading hardware, coping with e-book sellers that suddenly closed up and went away, and so on. Not only did e-book-reading consumers spend time on these sites; so did e-book authors, programmers of e-book reading hardware and software, and publishers interested in entering the e-book field. More than anything else, the coming together of so many parties in one industry helped to develop the e-book market we have today.
But it also showcased the problems with e-books, namely, the dissonant voices of those who demanded e-books for free, and fought against any attempt to limit e-books’ usage in any way. These groups, similarly empowered by the days of free content, had dedicated themselves to a “no compromise” position on e-books, and maintained a vocal presence on the Web, in essence terrorizing those who dreamed of making money off of literature. Their demands made booksellers nervous, fomented debate and argument among e-books’ staunchest proponents, and kept an industry unsure of its future in such a contentious atmosphere.
~
It may seem that bad business decisions by the first ISPs led to the creation of the anarchist points of view regarding property and goods available online; certainly they did not help. However, it would only be fair to point out that the initial users of the Internet, scientists and military men, had treated it as a free and open platform, freely sharing data and collaborating on far-flung projects in a spirit of camaraderie and teamwork. On the other hand, scientists and military men weren’t in the business of selling things, and they could barely have conceived of the development of the World Wide Web.
The Web, though riding on top of the Internet, was and is a very different animal. Unlike the established open-trust atmosphere of the Internet, the Web was left free to develop in any way its users desired. Perhaps there was a degree of naiveté on the part of Berners-Lee and the web’s creators, essentially opening a new frontier and expecting everyone to play nice and cooperate right off. Perhaps it was laziness, or a disinterest in writing a coherent set of laws to govern behavior on the Web, or in figuring out ways to enforce them.
But the combination of a lawless Web, ungoverned capitalism through the online services, and customers empowered to demand a free ride has resulted in an online landscape that is unconducive, and almost hostile, to providing paid content.
6: Computers’ mid-life crisis—The PDA, cellphone and netbook threaten the marriage
Computers had become a mainstay in businesses and homes, and the insistence of conservative business interests made sure that paper was not displaced by the electronic newcomers. For a time, the computer-paper relationship was secure, if less than sensible. But there were new electronic devices that would begin to worm their way into the relationship and create another sea-change in the computing landscape.
Before the computer had made a splash in businesses and homes, consumer electronics companies were experimenting with portable electronic organizers. These early devices were generally credit-card-sized, and designed to hold phone numbers, addresses and notes that were manually entered into their tiny keyboards. They were interesting and handy gadgets for the time, and a few of them allowed the user to do even more than the designers intended: I used one to store and access job-related crib notes to use in the field. But most of these devices were cheaply made and fragile; they never managed to survive whatever pocket I kept them in for very long. Still, they represented the first salvo of a new class of personal electronics devices that would revolutionize the way people stored, accessed and shared information.
After the computer made its splash in the office and home markets, these organizers came back, as the more robust Personal Digital Assistant, or PDA. They still functioned as organizers, but they were capable of even more—like their bigger computer cousins, they could have third-party applications added to them, giving them new tasks and ways of informing or entertaining the owner. And they were still small, first paperback-sized, then pocket-sized, and much sturdier than the first electronic organizers, enabling them to go pretty much anywhere.
My first PDA was a paperback-sized Casio Zoomer, rebranded by Radio Shack. Similar to Apple’s infamous Newton, the Zoomer, with its black-and-white LCD screen, was not fast or pretty, but it allowed me to write text documents that I could later import into WordPerfect (my first novel was written on the Zoomer), as well as store images, keep notes and spreadsheet records, contact data and appointments, translate words to other languages, perform complex calculations, and track my spending to sync with Quicken when I got home. It allowed me to do things that would have required me to carry a notebook with me wherever I went. It was a great tool while I used it, and a good starting point to get into the early world of PDAs.
The PDAs represented the next real threat to paper in the marketplace, especially when they were capable of being synchronized with computers. Now, instead of printing out information on, say, a company contact, you could download that information into a PDA and take it with you. You could even edit that information, and when you returned to your computer, the new information would be uploaded to replace the old information automatically. Although the first PDAs were sold as simple organizers, new applications were constantly being developed to allow users to download more documents to the PDAs and use them on-the-run, giving them more opportunities to skip the printing step altogether. This was fine for personal use, but businesses still insisted on their paper trails, so the PDAs had little impact on overall office paper use; but they did begin the process of relegating more and more paper to within-the-office use only.
The pocket-sized PDAs were effectively led by the Palm Pilot, a well-built little device that could synchronize its data with a computer, making it easy to load the device with notes, addresses, phone numbers, and anything else you could think of. Businesspeople latched onto the PDAs quickly, as they provided even faster access to the information that they used to store in their old organizing binders, were easier to update, and could be backed up with the computer, allowing the data to be recoverable if something happened to the PDA itself. And as businesspeople realized the same organizer functions had uses outside of the office, they were soon using them everywhere, and driving the casual consumer adoption of the devices.
Programmers also latched onto the Palm Pilot, because of the comparative ease of writing their own customized applications for it, and because of a developer community that encouraged the sharing and proliferation of programs that expanded on the Pilot’s value. Some of those programs included small applications designed to read customized text files… like e-books. Today some aficionados define the first “true” e-books as the ones that were written for the Pilot and other portable devices.
The Palm Pilot was one of a number of PDAs on the market. Though the Pilot was the most popular at first, Microsoft had a vested interest in dominating the market with its own PDAs, running scaled-down versions of the Windows operating system. Many companies that produced Windows-running PCs got into the game of selling PDAs that could be used with their PCs to extend their use beyond the desktop. I eventually moved to a Windows-based PDA, in order to get access to the many third-party applications that were being created for the Windows devices, and to easily connect to my computer and organizer apps.
Other companies also sold PDAs, seemingly running as many different operating systems as there were companies offering them. As time went by, the dominance of one PDA brand or another would shift about, creating a constant state of confusion among programmers as to which platform would be the best (read: most popular) to program for. While some programmers could adapt their programs to multiple platforms, some programs were developed for only one, leaving the other platforms cold.
And as many of the first e-books, and their reading software, were being developed during this period, programmers’ confusion resulted in multiple e-book formats and multiple reading applications, designed to run on multiple operating systems. There was an e-book format designed specifically for the Palm platform, and one designed by Microsoft for Windows. In addition, there were third-party formats designed to run on both platforms, and more besides. The downside, of course, was that none of these differing e-book formats could be read by another format’s reader application… my Windows-based PDA couldn’t read my friends’ Palm-formatted books unless I downloaded new software to my PDA, inevitably resulting in my downloading a half-dozen or more reading apps to cover every version of e-book I might read. It was expected that, sooner or later, a single format would probably rise to dominance, and all e-books would be created in that format in the future. But as no one could reliably guess which platform or format would become dominant, programmers stubbornly held onto the formats they had started with, and kept going.
So, at the beginning of the ascension of PDAs, e-books were already developing into a mish-mash of non-interchangeable formats for non-interchangeable operating systems… what David Rothman, writer and e-book enthusiast, would one day refer to as the “Tower of e-Babel,” in reference to the ancient story of a grand tower whose construction collapsed when its builders suddenly found themselves unable to understand one another. This variety of formats gave e-book producers and publishers pause, as most of them did not want to have to guess which format or operating system would achieve dominance, either; nor did they want to produce their books in as many as half a dozen formats, with the inherent extra work to create and proof each version of the book, and to keep track of each one in the sales stream. Producers and publishers couldn’t be blamed for this attitude: They were booksellers, after all, not programmers.
~
Before a format consensus had been reached, PDAs suddenly found themselves being replaced by an unlikely product: The cellphone.
The first cellphones were fairly simple devices, designed to make phone calls… and that was about it. However, the early cellphone market was built on fairly similar phone plans and services, so manufacturers had to entice buyers to spend money on the phones themselves (which incidentally served to lock people into service plans). In order to get the public to buy a lot of phones, manufacturers began loading them with extra features. The first phones had basic phone-centric organizer programs built into them. But as time went on, phones became more and more sophisticated, doing everything from playing music to providing maps to get you from place to place.
When cellphones began to incorporate the capabilities of PDAs like the Palm and Windows devices, including their ability to load custom programs, the death knell sounded for the PDA market. Today, not every phone has all of the capabilities of a PDA, but a number of them do. And although PDAs had seen only modest sales since their introduction, pretty much everyone wanted a cellphone. As cellphones saw meteoric growth, the PDA market, which had never quite hit its stride, was already burning out in the cellphones’ backwash.
You can guess what this meant: More e-book formats and reading applications. More cellphones were being designed with new, non-PDA operating systems, and cellphones were operated differently than PDAs. Programmers were faced with providing new software for these devices, optimized for a cellphone’s twelve or so buttons and, in the beginning, its lack of touch-screen control. And even in that realm, cellphones could be very different from phone to phone. Even the programmers were beginning to balk at those prospects. As a result, the Western cellphone market would see very few e-book reading applications created for it.
I specified the Western market because in the Eastern market—Japan, India, China—exactly the opposite was happening. In the East, more people used cellphones for more tasks, because many people could not afford a computer. But Easterners like to read, and in dense urban areas, where masses of commuters squeeze onto trains to go back and forth to work, a portable reader was highly desirable. Programmers there created e-book formats optimized for screen-equipped cellphones, so anyone with a cellphone capable of connecting to the Web could access these books. Of course, this was one more format for the Tower of e-Babel, and one more point of confusion for publishers.
~
Cellphones were arguably tolerable for surfing the Web… but they were small, and Web content sometimes didn’t display well on those tiny screens. They were also hard to use for other computer-like tasks. PDAs were a bit better… but they were getting harder and harder to find. And sometimes a computer—even a laptop—was too much for a simple job, or larger than what you wanted to lug around. Enter the netbook, and the same format confusion all over again.
Netbooks were designed to be mini, limited computers. They were not supposed to be as powerful or versatile as a full-fledged computer, but were supposed to do a small set of tasks well. They were also cheaper, smaller and lighter than laptop computers, and designed to run longer on a battery charge. Some expected that these devices would be immensely popular with consumers, and fly off the shelves. In fact, they have done reasonably well, but they have not captured the fancy of most consumers, specifically because of their limited scope and lack of processing power.
As netbooks were closer to computers than PDAs and cellphones, it would be expected that the e-book reading applications designed for computers would work on them. The problem was, very few e-book reading applications had been written for computers; in fact, Adobe’s PDF was the only e-book format regularly read on computers and laptops, with every other format coming in a distant second, if at all. Again, programmers were being expected to create reading applications for these platforms, and by now, e-book app programmers were getting sick and tired of being bothered by every new device that came along. Very little effort was put into creating e-book reading apps for netbooks, and today, many e-book formats are seriously underrepresented on netbooks… just as they are underrepresented on cellphones.
All of these devices—PDAs, cellphones, and netbooks—were doing their part to erode the relationship between computers and paper, as all of them could join the computer and make their contents portable. Users, in return, were slowly but surely discovering the benefits of foregoing paper for many daily uses, and increasingly, they were turning to e-books as another way to cut down on the amount of paper in their lives.
But even as more of the public reached out for e-books and other paperless apps, the myriad of operating systems and formats, and concern over whether the Next Big Thing would somehow sever their device’s ability to obtain e-books, kept many potential e-book buyers from taking the plunge, and many producers from bothering to help. The Tower of e-Babel was too large and daunting, and its future was too murky. Most people felt more comfortable opting out, and sticking with paper.
7: The programmers—The ME generation
Chapter 6 described how the myriad formats of the first PDAs, cellphones and netbooks caused confusion in the marketplace.
The confusion was not simply at the hands of consumers, who faced the seemingly daily decision of which brands, operating systems or designs to support. Computer programmers were feeling the pressure as well. Vendors wanted them to write versions of their programs for all of those operating systems, brands and designs. Though it was an exciting time to be a programmer, it was also a frustrating time. Creating a program often meant creating various versions of it for very different computer operating systems, with the requirement that each OS’s version look the same, work the same, and maintain full compatibility with all other versions of the program, including those written in the past. Even the simplest programs could take months, even years, to produce in multiple working, backward-compatible versions. And vendors often used questionable logic, and hid behind a professed lack of understanding of the issues, to demand those multiple programs for less than the actual cost of the programming work involved. But the alternative, turning out a program for only one operating system, meant alienation from a major portion of the market, derision from customers who used all the other programs, and damage to a company’s reputation in the marketplace.
So programmers were working hard, and much of that work was repetitive and frustrating, underpaid, and—let’s face it—uninteresting. Most of those programmers had gotten into programming in order to write programs for fun things like games, and cool things like orbital re-entry plots. Instead, most of them were writing programs designed to copy Line A from a spreadsheet into Line 2 of a word processing program. For multiple operating systems. At a payment equal to the cost of one program. Simply speaking, that wasn’t what most of them had signed on for.
~
The programmers of the early consumer computer age were late Boomers, many of whom were trying to distance themselves from the Boomer label, and members of the generation that followed, Generation X, often called “the ME generation.” These ME programmers saw the incredible potential of computers, and they wanted to be part of it, partly because they expected the developing profession to pay well, and partly because it meant being creative and having fun while they made money. The ME generation was dedicated to fun, and money, and most of the generation believed the mantra that “if you love your job, it will not be a job.” They threw themselves into the things they loved with abandon, took risks, and often succeeded on the strength of their enthusiasm alone. It’s no wonder that these people made up a significant part of the dot-com boom in Silicon Valley.
These programmers would willingly work on their office projects into the wee hours simply because they enjoyed it… then they would go home and work on their own programming projects for the same reason. Many of them cut their teeth writing computer games, the latest sensation that had developed from such simple video games as Pong, and was quickly becoming a major industry. Further, they were inspired by the graphics interfaces of the new consumer- and business-oriented computers, and they wanted to create games for those.
But there were only so many game-writing and NASA consulting jobs to go around, and a lot of programmers out there. Naturally, a lot of those people ended up in what they considered less-than-ideal jobs.
So they did the less-than-ideal work, for the paycheck. But on their own time, they continued to work on their fun personal projects, doing what they wanted, and maybe hoping that someday, one of their creations would give them their Big Break into a better job.
~
This is where e-books enter into the programmers’ world. Programmers were among the first e-book writers and readers, as they were already in the habit of documenting their work in electronic files, and sharing those files with others who used their programs. So they showed an early interest in digital texts and manuals related to their work. Programmers also tended to be geeks… so they showed an early interest in the fanfic that was developing, and, in fact, wrote some of it themselves.
At first, they used the same tools that they had used to write their manuals, i.e., basic text formats, a few of the more popular word processing programs, and Adobe’s Acrobat format. But as the PDA was beginning its rise, many of the programmers showed a quick interest in moving their electronic texts to the tiny organizers, either for utility’s sake, or just because they knew they could. The original organizers’ applications were okay for small collections of text (some of them had word or letter limits that kept a single file to no more than the equivalent of a letter-sized page or two of text), but of course, manuals and fanfic were longer than that. Fortunately, the organizers were open platforms, like computers, so programmers could obtain the hardware’s specs and write their own programs.
The ME programmers took to this with abandon, along with programmers who had made the same discovery about organizers in relation to other tasks. Soon groups, and then companies, were springing up, driven by programmers writing applications to do all kinds of things with handheld organizers, from reading texts to computing auto mileage, from playing customized games to plotting orbital re-entries, and everything in-between.
As mentioned in Chapter 6, PDAs were sold by many differing companies, and had many operating systems and control interfaces, meaning one program would not automatically work on every piece of hardware. Some programmers were willing to rise to the challenge, and create versions of their applications for multiple operating systems. But other programmers did not want to be bothered with multiple operating systems and interfaces: Many of these had latched onto one OS or interface, declared it “perfect” in their eyes, and therefore the only one worthy of their time. So many programs initially went no further than their original iteration for one OS.
But programmers were not all of the same opinion of operating systems—one person’s “perfect” OS was another person’s “crap”—and while many of them appreciated the features of a particular program written for another OS, they still wanted to be able to use it on their own OS. So programmers started writing their own, often unauthorized, versions of other people’s applications, in order to port them to the OS of their choice. Some of those programmers were good, and very thorough, and managed to duplicate the features of the original application almost to the letter. Other programmers, perhaps lazier, or perhaps not so enamored of certain features, replicated some functions of the original application, but not others, and occasionally added a feature or two of their own.
This activity resulted in a horrible mish-mash of applications across multiple operating systems, some compatible with their original app, some not, and some barely recognizable from their origins. Moreover, those multiple applications required specialized text files, and many of those files could be read by only one application, and in some early cases, on only one operating system. E-book consumers, interested in the application that contained the most features they wanted, would buy the operating system that supported the apps they wanted—though this often conflicted with the desire for other applications and their features, and forced users to assume a “this OS does the most things for me” attitude when choosing their hardware and software. The market was becoming heavily divided amongst operating systems, application versions and text formats, right at its beginning.
As PDAs were supplanted by cellphones, only a few of which could run a reading application or display a text file, some programmers responded to the new market with new versions of their reading programs. Most did not. In most cases, only those cellphones that ran a version of an operating system that already had an e-book application written for it were able to read e-books. This led to more of the same fragmentation, especially as consumers now had to choose a cellphone from the brands and operating systems their provider offered, which varied from provider to provider. Providers with popular phones initially did well, and when cellphones began to incorporate the more advanced organizers, those phones with a popular OS helped to drive the providers’ success in the market.
Many people who had not previously experienced e-books were now discovering the possibilities of reading them on their cellphones (especially in the East), and were further swelling the ranks of e-book fans. But others were discovering that their new hardware did not support the e-book applications they used on previously-owned hardware, perhaps with a different operating system. While some were thrilled with the new possibilities, others were angry that they could no longer read the books they had obtained (and in some cases, paid for) on their new devices.
~
As new software companies developed and looked to take advantage of existing markets, a few of them took notice of the fledgling e-book market, and the small-time or struggling nature of the applications and formats involved. A few companies quickly came to the conclusion that the small-time players were “doing it all wrong,” and that they could do things much better. They also believed they could profit from a successfully-entered e-book marketplace. So these larger companies threw their hats into the e-book ring, starting at the bottom to create their own applications and e-book formats, each in pursuit of its ideal of the “perfect” e-book format. Once again, programmers were jumping through the familiar multiple-OS, multiple-interface hoops, though at least now they were getting paid much better to do so.
Other companies believed an existing e-book format was “ideal,” but that it had suffered from not having the correct marketing strategy, or the money available to market it. As they felt they could transform these “also-ran” applications into gems, and incidentally make themselves a tidy profit, they bought up these small-time companies, or licensed the applications and formats from the programmers who created them, and put their marketing plans in gear. Programming teams in this situation often found themselves being traded like football stars, or discarded as the new company’s programmers were given the green light to continue on without them. And some of the legacy applications were suddenly being altered by their new parent companies, when it was decided by some higher-up that “this was good… but that will make it better.”
Using their marketing might, they threw major promotional campaigns behind their formats, mostly aimed at other businesses—the book publishers and sellers—in order to secure a distribution chain for their products. At that particular time, tech companies were riding high in popularity and profits, and were good at convincing other companies (and investors) that they were the future, and could do no wrong. Many publishers and sellers signed on to work with these tech companies, and visions of dollar signs began dancing in everyone’s heads.
Unfortunately, there were issues beyond the formats and applications that none of the parties involved had carefully considered: They faced an uphill battle convincing consumers to sign onto the new applications and formats, as consumers were already smarting from changing operating systems and hardware and losing their existing e-books; there was a lack of actual books prepared for the new or altered formats, and publishers were still refusing to release their books as e-books; there was no infrastructure in place to handle sales and distribution of electronic files; and there was the concern over file security, the fear that customers would resist the idea of paid digital content, and would perhaps share it with others, robbing the companies of their expected profits.
All of these problems were dropped quickly into the laps of the programmers, who had never shown any interest in them. Further, the upper echelons’ poor grasp of the complexities involved, ineffective communication between them and their programmers, and a lack of effort in finding out exactly what consumers wanted all exacerbated the problems of trying to create the perfect consumer market for e-books.
In the end, decisions were reached that were often ill-informed, programmers were given nigh-impossible tasks, and marketers would take a “this is what you need” (as opposed to “tell us what you want”) approach to marketing. The corporate e-book formats did take off, mostly with e-book newcomers (of which there were still many), but the veteran e-book consumers mostly shunned them in favor of older applications and formats. The problem with that was, since most of the programmers who had created the old formats were now working for others, and now found themselves with significantly less time for personal projects, the old formats and applications weren’t getting the support they needed to continue to thrive. One by one, the old formats dropped to the wayside, with no changes or improvements to the applications, no new operating systems getting their version of the apps, and fewer and fewer avenues of support available.
The ME generation had moved on. The “Tower of e-Babel” was tall, strong, and looking more and more likely to collapse every day.
8: The literati—The peasants are revolting! (You can say that again.)
(With apologies to Johnny Hart)
Unprofessionally-published literature and amateur writing based on popular or cult subjects of the day—what has been referred to over the last few decades as “fan fiction,” or “fanfic”—has been held at arm’s length by most of the public for most of its life. As described in Chapter 1, Big Pub is partially to blame for this, as they have actively promoted the idea that “if we didn’t publish it, it must suck,” and that idea has become part of the modern assumption about non-publisher work.
There is another reason that must be acknowledged, of course: The fact that much of that fanfic did, actually, suck. Fanfic is fanfic, whether it is written by a professional author on their own time, or by a twelve-year-old who should’ve been forced to spend more time in English class before being let loose on a keyboard.
But even among the lemons, there were always apples. Many people who wrote fanfic of one kind or another went on to become professional writers, because they really could write well. This aspect of writing rarely gets mentioned, though, because publishers wouldn’t want their new writers to be associated with the very fanfic they insist is all so very bad. It also allowed publishers to remove the good writers from the fanfic world, leaving the bad writers there, and thereby strengthening their position that all fanfic was bad…
At any rate, this vicious circle did not used to matter, because one of the other common aspects about fanfic was that it rarely got much exposure. This was due to its amateur nature, and the fact that amateurs don’t usually have the money to print and distribute their writing very far. Generally, fanfic writers were known to the fans in their immediate sphere of influence, which was usually a fan club or its chapter, and possibly their immediate family and circle of friends. In short, most fanfic didn’t go anywhere.
All of that changed with the introduction of the Internet and the Web to the public. Suddenly, a club made up of a few buddies meeting in the basement every Thursday could become a huge web-based organization with an international following. Newsgroups, e-mail and discussion forums allowed a single voice to reach thousands, even millions of people, where it had only reached a dozen before. In this world, fanfic would achieve a global scope, and scare the bejeezus out of the literary world.
~
Fanfic stories were perfect candidates for the first e-books: They were available; they were free; they often represented a pop subject or theme; they often catered to consumers who were a bit less discerning of quality versus the subject; and did I mention their being free? E-books did have a bit of a learning curve, but most fanfic writers were more than willing to learn their way around a piece of software that would get their writings out there for millions to see (potentially, at least; in most early cases, they were lucky if their audience numbered in the dozens). And a fringe benefit of e-books was the small devices they were being read on: PDAs, unlike any other kind of delivery mechanism for literature, had the distinct advantage of privacy, because it was almost impossible for others to see exactly what you were reading. This meant that the fanfic that might be too embarrassing to let your buddies, or your girlfriend, or the dozens of strangers on the train around you, see you reading… was an embarrassment no more. You could read away, because no one else had a clue whether you were reading Captains Courageous or Captain Kirk and the Orion Slave Girl Harem.
Regardless of what people thought you were reading, it was soon well-known that the assortment and quality of e-book material out there was more heavily skewed towards Captain Kirk than Captain Hornblower. This suited Big Pub just fine, as it gave them more reason to downplay and ignore the fledgling digital format. And they quickly passed the word on to their closest allies: The literary critics and columnists who were such a major part of their promotional machine; and the literary buyers, those who appreciated a professionally-produced piece of literature, the kind of people who carefully and lovingly displayed shelves of hardbacks in their studies and living rooms.
The party line was that e-books were all written by hopeless, pimple-faced kids, uber-geeks working from their labs or their mothers’ basements, and teenage girls with pop singer crushes. They were not to be taken seriously; they were their own punchline. E-books became a new indicator of writing quality in the public eye and the business world: If you were only out in e-book, it could only mean your writing sucked. In many cases, publishers would even pass on the opportunity to publish good copy if it had already been released as an e-book, a major reason being the “stigma” that existed on e-books, and the concern that it would negatively affect sales of a related printed book.
All of this eventually percolated down to the writers themselves, who began to realize that if they wanted to be taken seriously by the publishers to whom they hoped to sell their books, they needed to avoid e-books, or be tarred by the same brush as the Trekkies and pop rock fans. This kept many authors away from e-books, and the possibility of getting their work into the hands of others… there is no telling how many literary works may still be languishing away in slush piles the size of Citizen Kane’s library, which might instead have been delighting readers the world over as e-books.
~
In this realm, the agent was as important as ever: They were the gatekeepers to Big Pub, as far as authors were concerned. As e-books developed, and new writers started to come seemingly out of the woodwork, it was the agents’ job to find the “diamonds in the rough” and bring them to Big Pub for their chance at “becoming professionals” and selling “real books.” As the preponderance of quotation marks suggests, these euphemisms were designed to remind writers that they were considered nothing but amateurs, no better than the John and Jane grocery-list-writers of the world, and would not be considered well and truly legitimate until they had gone through Big Pub’s machine and become Professional Authors.
Doubtless there were as many authors as agents who did not truly believe this. On the other hand, one of Big Pub’s strengths was the significant income they could generate through producing and promoting a work, an income much larger than a self-published author could expect. So, for the lure of a larger paycheck, and despite the lack of veracity of the party line, authors continued to queue up for entry into the Big Pub machine, and the agents (who got their cut right after the publishers) were more than willing to begin filtering authors along.
The agents were always in business to play both sides against each other, while they raked in their profits. But figuratively speaking, they were in bed with the publishers, not the authors. Agents knew what publishers were looking for, so they could either spot those authors who embodied it, and send them along, or they could mold and shape an author into what publishers were looking for, and then send forth their Eliza Doolittles and make their commissions. But as the ultimate goal was to give the publishers worthwhile authors—since it was reasoned, after all, that the money would be generated by the efforts of the publishers—the agents were always looking out for the publishers more than for the authors. This pro-publisher bias meant that agents essentially shared (or aped) the same views about e-books and their authors as the publishers. Agents would actively discourage authors from digitally publishing, not because they did not expect e-books to be successful or widely-read, but because of the adverse reaction they knew they would get from the publishers.
So, Big Pub told everyone else how to think; agents acted as Big Pub’s gatekeepers, telling authors what to say and do if they wanted to get their audience; and authors bought into it, especially if it meant getting into the inner circle and getting the ultimate payout, fame and fortune. The Castle mentality was especially strong, within and without the industry—even those who were not directly involved in the industry had the clear feeling that Big Pub and their agents thought of consumers, and the writers on the outside, as worthy only of their contempt. But somehow, that elitist attitude was seen by many as being beneficial, because those “elite” publishers were screening out the trash, separating the wheat from the chaff, polishing the coal into diamonds, and all that, and bringing Good Books to the masses. And authors wanted to be a part of that elitist world, because of the financial benefits it promised. So, publishers and their agents were largely allowed to get away with treating outsiders, even potential collaborators, authors and artists… as peasants. How revolting.
~
Despite the self-perpetuating name-calling and class separation going on in the Big Pub world, a few more forward-thinking publishers were willing to stand up early and say, “E-books aren’t crap for peasants. Let’s prove it.”
Harlequin Books had been a successful publisher long before anyone had ever heard of e-books. They had an international following of dedicated fans, and even though their romance literature was not considered High Art, the company was a popular, well-known and respected brand in the market, and considered one of the most successful publishers in the world.
When e-books began to develop, Harlequin saw an opportunity to expand its audience into all the PDAs beginning to float around the marketplace. It was an ironic fact that, for all of Harlequin’s success and popularity, the titillation factor and “chick-lit” image inherent in their many titles and imprints meant that many readers felt embarrassed to be caught reading a Harlequin book in public, not unlike the image issues faced by fanfic and pornography readers. But as mentioned earlier, the PDA had largely removed that image stigma and provided a private way to enjoy your literature of choice… and Harlequin picked up on that fact quickly.
Harlequin soon began to publish its printed works in e-book formats as well, and quickly set up an online sales site. It did not take long before they began to enjoy significantly increased sales thanks to their e-book formats, especially for their older books, which were hit-and-miss to find in used bookstores but much easier to find on Harlequin’s web site (and they had, to put it appropriately, an attractive and desirable catalog). For Harlequin, e-books were a runaway success.
This fact was surely galling to other Big Pubs, because whatever they may have thought about Harlequin Books, publishers are essentially about making a profit, and they couldn’t deny the significant profit Harlequin was making off of those peasants—excuse me, paying customers—who didn’t seem to mind e-books at all. But instead of trying to replicate Harlequin’s success, Big Pub seemed adamant in maintaining its position, sure that its perseverance would be vindicated someday. And it continued to look down upon “pop” publishers like Harlequin… even as Harlequin out-profited them many times over.
Harlequin, in the meantime, is taking its “pop” image and laughing all the way to the bank. And its customers, glad they are being treated as people and not peasants, are happy to carry them there. It’s an odd tableau: Harlequin and a few similar publishers are being carried triumphantly into the future by their happy customers; while Big Pub sits in its castle, pretending not to notice the commotion, wondering why no one is carrying them along, and demanding more tribute from their peasants in thanks for another wonderful day.
While some publishers are showing the foresight to follow Harlequin’s lead, and to see what it is doing so right, there is no telling how long it will take the other major publishers to respond, if ever, to the obvious success of the proven e-book business models available to them. They remain the boat anchor on e-books’ journey to success, continuing to drag behind until either the anchor is withdrawn, removing the resistance… or the cable breaks, and leaves the anchor behind.
9: The anarchists—We will bury you.
As described in Chapter 5, the open and relatively lawless beginnings of the Web, combined with the early business models of online services like AOL and CompuServe, introduced the public to the idea of getting content from the web for free.
The idea of giving away free content to justify charging for something else (in their case, an e-mail account and use of the online companies’ servers) was certainly not invented by the likes of AOL and CompuServe. It’s an ages-old and psychologically-sound marketing gesture designed to put the vendor in the customers’ good graces, thereby making them more sympathetic to the vendor and their desire to make an honest buck, and by extension more willing to pay for the vendor’s product. It is a balancing act for the vendor, who has to gauge how much and what type of merchandise it can give away without hurting profit, which can be further influenced by customer reaction to the free merchandise itself. As a business tactic, it also tends to work better with vendors that have a very public and human face, such as a local grocer or baker, who sell partially on their personality and therefore want to present the nicest personality possible to their customers.
This marketing method has one drawback, however: It almost always hurts business if you discontinue the practice. Customers tend, ironically, to feel cheated and resentful. This is why the marketing tool works best with storefronts with a very human face, because that human face has the chance to explain the situation to customers, and is more likely to appease them through direct human contact. When corporations do this, however, there is no approachable face to confront, only a spokesperson seen on a TV or website, who very often is simply an actor playing the front-man for the sake of appearances. Customers cannot approach the corporation, so they cannot be appeased, and they react by withholding their money or buying elsewhere. Or, in some cases, they will resort to stealing the vendor’s content, more out of spite than of genuine need, costing the vendor some of their profits.
The online services thought they could take advantage of a wealth of web-based and customer-based content that they themselves did not have to create, and offer it all free to customers. But when those same services began to cost money, the online services had no human face to present to the customers, and so they felt a backlash from their customers. Instead of being sympathetic to the online services’ plight, the customers left in droves, and often used their new Web connections to verbally abuse their former vendors and urge others to avoid them. The online services had themselves created the beginnings of an anarchist movement that would sweep across the Web and hang like a shadow over all Web-based content providers.
~
The Anarchists were not simply people who wanted more free stuff. Well… actually, they did just want more free stuff. But they made an honest effort to argue that they were not just demanding free content because they were greedy bastards; no, they reasoned that they were both inspired by the potential of the World Wide Web, and mindful of a reality of digital documents that made them unique among all the products in the world, a reality which justified demanding stuff for free.
The potential benefits involved the Web’s global nature, and the fact that information could be shared with someone on the other side of the world in seconds. Digital data could cross the old boundaries of space and time seemingly effortlessly… and they could cross political boundaries, too. The Web represented a communications tool that might transcend culture and politics, finance and disadvantage, and finally unite the citizens of the world directly to each other, across geographical boundaries and over political proxies. Web users saw this as nothing short of evolutionary, and indeed, many new and existing legitimate social, cultural, political, environmental and personal movements quickly moved online to create international identities and further their agendas.
The reality involved the nature of digital documents. Most other products in the world were tied to what was traditionally considered a tangible physical object. Before the Web and computers, most products were grown or manufactured, and one product was one product, period. An apple was an apple. Five automobiles were five automobiles. Since Mankind had begun to trade, transactions were based on objects, and the effort it took to create those objects. Materials costs, manufacturing costs, labor costs, transportation costs, and expected profit were all broken down to a cost per item, based on the number of items the listed costs produced.
Digital documents were different. To begin with, digital documents were essentially organized data, a series of ones and zeroes in a storage medium. When the ones and zeroes were called forth, they became a series of electrons in a circuit that were expressed on display screens or through speakers. Although it took a physical object to actually store and display a digital document, the document itself was not a physical object in the traditional sense. (Or even in a non-traditional sense: You couldn’t even call a digital document a “cloud of electrons” or somesuch, since the storage of the data did not depend on a specific medium, but could be stored in any number of mediums, and would be in a different medium when it was called for.) Digital documents were stored ideas, without physical substance, in the truest sense of the word.
A major aspect of these stored ideas was that, since they were physically insubstantial, they could be replicated ad infinitum by the miracle of electronics without the requisite increase in physical mass that would be involved with the replication of, say, a book. A single memory file, an e-book, could quite literally be replicated and sent to every person on the planet, and it would essentially be the same file for every single person as it was for the original holder of the file. The specifics involved will probably keep physicists and philosophers in deathmatch-style debates for decades, but the upshot is that the cost of all that replication only amounts to the relatively small amount of electricity used, and no other physical cost.
(In point of fact, it is possible to establish a physical measurement of digital documents… you only have to delve into quantum physics to do it. I’m not sure if that means accountants would need to learn quantum physics, or if physicists will be working in accounting offices in the future, but either way, the very idea of accountants and quantum physicists in the same room together frightens me. And I suspect I’m not alone.)
Digital documents broke the established molds for physical products to pieces. With digital documents, one document could create an infinite number of replications at virtually no cost and in virtually no time, shattering the manufacturing costs and time-to-shelf paradigms. The cost of communicating the document was so minuscule as to be laughable, and the amount per document only shrank as more documents were sent out, thereby destroying the transportation paradigms. Digital documents required no appreciable physical space to store, rendering moot the warehousing and storage paradigms.
And all of those paradigms had formerly been the ones used to establish the cost of an object. At once, digital documents had reduced the required replication costs to—virtually—zero. The only thing left was the desired profit asked for by the author or vendor—and consumers had their own ideas about that, too.
First was the logic of attaching a specific cost to an item that could be endlessly replicated at virtually zero cost. Some consumers reasoned that an author who charged a dollar for a book could somehow (and this part was never considered theoretical by some, but assumed as given) sell millions-to-billions of copies of his zero-cost-to-replicate book, and become a millionaire or even a billionaire overnight. This idea did not sit well with some consumers, who expressed concern about it being so easy for one lucky person to earn a fortune… presumably, while children were starving in Africa, Indians were living in cardboard boxes, etc, etc. They demanded limits be placed on the amount a single person had a “right” to earn, to prevent the riches of the world ending up in the hands of a few greedy Capitalist authors. This faction was never able to describe how such a Utopian system would manage to produce any profit for the authors at all, but they insisted that it was the only fair way to sell books to the masses, that the money that might be made by the author was completely incidental, and that profit should not in any way impact the author’s desire to write and contribute to the world’s literary riches. The opinion of these non-creators was that the ability to simply create should be enough to satisfy all creators.
Though the logic of this approach was flawed, it nonetheless won the support of many consumers who, beyond their altruistic trappings, were simply looking for more cheap-to-free goods. Authors, understandably, were less than impressed, considering it a double insult to be told that they had no right to a say in earning a wage for their work, and that they should just be grateful for the opportunity to get their writings out. Other compensation methods were proposed, perhaps the most popular being an endowment to creators paid through taxes, and based on the popularity of a work. The other proposals have run the gamut from questionable to crazy, and so far, no one has developed a concrete plan for any of these far-reaching ideas, nor have any corporations or governments indicated a willingness to even study the matter.
So it was left to the creators and consumers to argue the details, but neither side was willing to listen to the other long enough for any of these discussions to be considered arguments… more accurately, they were unilateral demands thrown back and forth, and it seemed only laryngitis would ever stop them. An incredibly adversarial rift was developing between the authors and the consumers, with no publishing middleman to mitigate the issues.
~
Another consumer idea about profit had to do with… ideas themselves. This was an extension of a practice that had begun in the 1700s, but was becoming increasingly contentious in recent years… the concept of copyright. (Chapter 14 describes the history of copyright, in relation to e-books, in further detail.)
Prior to the 1700s, the average person was usually too busy working the fields or the store to do something as energy-intensive as writing. Thanks to the Gutenberg press, most writing could quickly be copied by others, so there was little profit in such a venture. The concept of copyright was designed to allow the creator of a written work an exclusive right to profit from that work, for a set period of time. The intent was to encourage creators to create, at a time when Europe and the Americas wanted new ideas to develop and flourish.
The copyright concept accomplished exactly what it was designed for, encouraging the creation of new literary works, promoting those works, and earning a due compensation for their authors. The laws also helped to correct a number of irregularities that countries had taken advantage of to reproduce the work of another country’s authors without compensation (a popular example is the work of Charles Dickens, which was reprinted and sold widely in the United States for years without due compensation to him or his estate).
Until the late twentieth century, there was little reason for complaint or grievance against this concept, amongst creators or consumers. But the events surrounding a cartoon mouse soon began to change the atmosphere of copyright, and its status and value in the eyes of consumers.
The world had grown up enjoying the antics of Walt Disney’s Mickey Mouse for two generations, and although the trappings of the multiple amusement parks were often considered ostentatious (or simply strange) by many visitors, Disney’s many creations were beloved worldwide. So it was with considerable shock and disdain that the public discovered the Disney Corporation’s lawyers had been working successfully to have copyright law altered, to allow the corporation to maintain absolute control of Mickey Mouse beyond the point at which copyright law would have placed the character into the public domain.
It was accurately realized that such an alteration to government law had been enacted thanks to the sheer financial might of the Disney Corporation, and purely to serve its financial interests, in what many saw as a clear perversion of copyright’s intent. Not only was Disney damned for such an act, but the United States government was implicated in that damnation for accepting Disney’s money and changing the laws. And it didn’t stop there: Other Disney creations, and the characters they had borrowed from public-domain fairy tales and European authors to make their animated features, would also have the same protections, allowing Disney to profit from their sales in perpetuity. Other smaller but similar acts in other countries suggested that this was a global problem that needed addressing, before the corporations managed to collect and keep all intellectual property, and keep the public in a stranglehold as they meted out their wares.
Thanks to a loud public backlash against copyright law caused by the Disney actions, those who were aware of the issue (actually not a large group, but a very vocal one) wanted to scrap copyright, claiming that it was completely corrupt and useless. They took advantage of the freer socialist movements in Northern and Eastern Europe to proclaim that “Ideas should be FREE for all,” and held up Disney and Capitalism as examples of what would happen when ideas were not free.
This came at just the wrong time for e-books, which, because of their easily-duplicated nature, could use the protection of law to help protect the interests of their authors. The concept of copyright law was about the only thing that offered authors a guarantee of profit from their works for a set period. Without copyright law, creators would be back in the situation of the pre-1700s, too busy working other jobs to make a living to focus on creating new works, because they could not expect to make any profit from their creations.
This idea also essentially condemned all the people whose sole profession was creating. Although much public debate continues as to the “talents” and “value” of many popular authors and their best-sellers, the fact is that there are multitudes of professional writers worldwide, people whose livelihood comes from nothing but their written creations. Removal of copyright effectively would rob those people of all income, forcing them to get other, paying jobs and leave the writing to others. Expecting so many non-professionals to step up and fill this literary hole, when those people had other jobs to do to put food on the table, was completely unrealistic. The blow to written works—not just pop novels, tell-all books and ghosted semi-autobiographies, but textbooks, reference books, travelogues, histories, instructional books, etc, etc—would be horrendous, and for many people and cultures, it would be like stepping back 500 years.
This logic of discarding copyright altogether was as flawed as the “e-books have zero cost” argument, essentially throwing out the baby with the bathwater, and after a single “accident.” A more measured response would have been a demand to have the copyright system reviewed and reset to its original intent, to guarantee an exclusivity of profit for a set period, to encourage creation. And in fact, many e-book consumers debate this issue ad nauseam with each other; but alas, none of it is debated directly with the governments that will be involved in setting and enforcing the laws.
~
Some consumers decided they had the right to force their desires on greedy capitalist creators, and they set out to essentially hijack their creations and give them away. Using technologies and techniques that had already proven successful in the digital music wars of a few years previous, consumers would buy an e-book, or obtain it from someone who had bought it, and place it on another website; or provide a link from their computer to a peer-to-peer website, allowing others to download the e-book free of charge. One of the most famous of these sites named itself The Pirate Bay, an homage to the term “piracy” that had already been introduced to the Web lexicon in relation to digital music, and to the almost mythological romantic ideal of the freebooting, carefree oceanic bandits of centuries past.
It soon became known that any e-book author was subject to having their works posted on The Pirate Bay, or some other less-well-known peer-to-peer or personal website, free for the taking. Many authors were directly singled out for such treatment, especially those who had publicly denounced e-books, charged too much for their e-books, made too much money in the eyes of some individual consumer, or just insulted the wrong pro-e-book group or individual. The Web had already seen this with Napster and MP3 music files, and those involved with literature did not want to see it happen again. The anarchists, however, stuck to their arguments, illogical as they were, and used their unstoppable ability to hijack e-books as a means to force e-book creators to see things their way, accept the inevitability of free content, and like it.
The aforementioned situation and illogical arguments had one severe drawback: They did not admit to the creator’s right to profit from their creation. They amounted to the consumers’ slapping their beloved creators with one hand, and demanding their work with the other; in a way, proving to be as greedy and unthinking as the Big Pub organizations they had railed against for years. The irony was not lost on creators, who were feeling increasingly less than enthused to create anything for such a greedy lot. The rift between creators and consumers became wider, and was soon to become deeper as well.
Many of the better creators were thus driven from the screaming anarchists, straight into the arms of… the waiting publishers, arms open, murmuring of protection in soothing voices. The publishers claimed to have a secret weapon against the anarchists, in the form of software that could be appended onto a book to prevent its being hijacked and given away for free. Generally called Digital Rights Management, or DRM, this software security method would theoretically force each consumer to pay for their copy of an e-book, and use an encryption system to open it, or they would not be able to read it. Many authors signed up for this protection, hoping it would give them the security that copyright did not seem able to provide.
Unfortunately, there turned out to be no form of software-based DRM that could not be cracked by a dedicated anarchist hacker. Many of them delighted in finding ways to crack DRM methods, and they would not only post their hacked e-books online for others, but would even share their cracking programs freely online. And even before those most dedicated hackers got their hands on the e-books, networks of readers began purchasing printed books, scanning them and using Optical Character Recognition (OCR) software to create unauthorized e-books to give away, circumventing DRM altogether.
It seemed there was no way to stop the Anarchist movement. Authors’ and publishers’ attempts to thwart them failed every time, and only escalated the conflict. And very few authors or publishers even wanted to attempt to placate or satisfy the smug, superior, totally unapologetic pirates. It seemed that the e-book movement would ultimately be doomed by the inability of both sides to come to any agreement; they could not even admit that they needed each other as much as they hated each other.
10: The consumer—Tear down this wall!
Throughout much of the hoopla over e-books, there was one faction that was feeling more and more left out of it all: The consumer. It was the consumer who wanted to read e-books. It was the consumer who owned the hardware and software that allowed them to read e-books. It was the consumer who asked the publishers for more digital material. It was the consumer who asked programmers for more features, to improve their reading experience. It was the consumer who offered suggestions as to how publishers could better serve them, and make them want to buy more e-books. Yet, at every turn, it seemed to be the consumer who was being ignored by hardware and software manufacturers, programmers and publishers. Suggestions fell upon deaf ears. What changes they saw seemed to actually be the opposite of what they wanted and had asked for. Increasingly they found themselves asking: What the heck kind of consumer-oriented business is this?
Consumers had responded positively to the first e-book creation tools, often provided for free by the e-book application programmers. Many consumers saw this as a tacit encouragement for them to start creating their own literary works, to become authors themselves, and they wasted no time in doing so. Fanfic was among the first works created, but soon budding authors were writing original material as well, new ideas with new characters, in every genre. Many authors showed a willingness to convert their lit into any and every format they could get their hands on, while others chose a select few, usually the most popular (though that claim was often highly subjective), and created their works in those few formats. Overall, it proved to be a successful empowerment of those writers.
But it didn’t take long for these budding authors to realize how little Big Pub thought of them for their efforts. They even saw evidence of Big Pub actively ignoring them, specifically because they had self-published in e-book formats. Big Pub had its finger pointed at them when it used words like “hack,” “amateur,” “fanboy” and “untalented,” and this was without even looking at their material. E-book authors were crushed, and this created a fomenting hatred of Big Pub, even as many of them knew the only way to expect Big Money from their books was to go through the Big Pub system. Soon, smaller publishers were picking and choosing from these budding authors, and proving to be much more honest and forthright than many of the Big Pubs in dealing with new talent.
Consumers also wanted certain features from their e-book reading applications; quite often, a feature that had been present in an app they formerly used, and so was now desired in the app they presently used. But the programmers didn’t seem too driven to provide those changes—most of them were already neck-deep in the demands of the company’s department heads, who were working from their own agendas for improving the programs (usually along the lines of priorities that would increase security, or create customer lock-in). Other programmers who had originally created the e-book programs were now working on other projects, many of them having become disillusioned with the evolution of the e-book market and moved on.
One of those desired features was the ability to read e-books from one format on another format’s application, or to be able to easily convert one format to another. After so many years of shuttling from one format to another, some consumers had entire libraries of old e-books that their new software could no longer read. They begged for ways to convert the old formats to newer formats, allowing them to continue to enjoy their purchases. This seemed to interest publishers not at all, as they saw an opportunity to force consumers to buy only the new e-books for which they would get their highest margins, and maybe even get some consumers to pay twice for the same book, albeit in a different format.
Finally, consumers were aware of so many older printed books that they wanted to see in digital formats, but which were not yet available, many of them in the public domain. Again, they begged publishers to provide these older books for them, and again, publishers ignored the request: They saw no value in making such an effort to provide books which were now available only on used bookstore shelves, or public domain books for which they essentially profited only from the cost of printing, when their real profit came from selling their latest books.
~
Feeling increasingly ignored, e-book consumers and writers felt obliged to take matters into their own hands. Groups organized to scan and transcribe their favorite works into e-book formats. One of the most famous, Project Gutenberg, began transcribing public domain works, which the publishers had all but written off as not worth their time (this, despite the fact that they could still sell them in bookstores without paying copyright royalties to anyone, making them pure-profit products).
Other groups, such as those who frequented the Pirate Bay, were not squeamish when it came to rights or copyright. Without permission, they transcribed copyrighted works anyway, and joyfully made them available to others against the direct or indirect wishes of the authors or publishers. These groups were the backbone of the Anarchist movement, and they felt they had the moral right to take the actions they chose, partially because of the supposedly immoral way Big Pub (and the Capitalist world at large) had treated them. This middle finger raised to the Big Pub machine served to fan the flames of discontent on both sides. Subsequent actions by Big Pub to develop tighter e-book security in the form of Digital Rights Management, or DRM (more on that later), seemed to suggest to some consumers that the publishers considered all e-book consumers to be guilty of copyright infringement, and were seeking to deal with them en masse. (You may have noticed by now that there’s been enough questionable logic strewn throughout e-books’ history to choke a Vulcan.)
Other consumer groups began banding together in online discussion forums. These forums quickly moved beyond simple mutual support sites: Their members began circulating information, not only on how to create e-books, but how to find them, convert them to different formats, and crack e-book security measures; they organized members to support, or attack, whichever author or publisher raised their ire; they transcribed their own books, and those of public domain authors, into multiple formats; and they shared other sources of information or comments that supported their stance. Many of them centered on specific kinds of e-books or genres, while others operated based on a specific credo or point of view, whether it was the position of the writer, the consumer, the publisher, or the anarchist. And some forums were amalgamations of all of these groups.
One of the better-known and more neutrally balanced of these forums is the MobileRead forum, a truly international website that supports and encourages all matters e-book. Its members have kept close tabs on the publishing world as it tried to develop an e-book market, and spared no effort to skewer any entity that did a bad job of it. MobileRead is a global cross-section of e-book enthusiasts, with members of every nationality, creed and credo imaginable, consumers and creators alike. It remains one of the best sites to visit to get a well-rounded picture of the e-book world, from all perspectives.
Through websites like MobileRead and others, e-book consumers found e-book publishers, and through their discussion and patronage, encouraged them to succeed (some more than others, depending on whether or not their sales or security policies met with the website members’ approval). Individual authors (like myself) also visited these sites, and made an effort to promote their works through them.
Consumers proved to be merciless with many of the authors and publishers, holding them up to an ironclad standard of behavior and online presence—to become a well-known and respected member of the online community, not just a post-and-run bookseller—before endorsing their books. Consumers were demanding, in essence, the friendly neighborhood baker or grocer, and shunning the fast-talking shill, much like the customers in any neighborhood tend to do with a business newcomer. E-book consumers were forming coherent communities, and making sure visitors knew the rules before being accepted to the fold.
~
Being thus emboldened by a sense of community, e-book consumers began discussing their ideas surrounding e-books. One of the biggest subjects was e-books’ cost. With a large supply of free fanfic and public domain e-books available to anyone, consumers asked: “Why does any e-book need to have a cost?” And the corollary to this was: “Just what are we consumers buying, exactly?” Far from treating these as purely philosophical subjects, e-book consumers delved into them as if the very future depended on the outcome, and as if the question needed to be settled now. As Chapter 4 described, consumers began applying their own logic to the subject; the most vocal of them rallied around the very simple idea that, when you boiled everything else away, they should not have to pay for digital content. But despite being unable to effectively describe how the people who work to provide those e-books would be compensated for their effort, these groups remained steadfast to an impractical concept and an unlikely future.
These same groups debated the pricing structures from e-book publishing houses. They compared them directly to the costs of printed books, and they held publishers up to derision when it was claimed that the cost of e-books was directly tied to the cost of printed books. E-book consumers insisted that these costs should be a fraction of printed product costs, down to practically nil, since the costs of printing and distribution did not exist with e-books. In fact, very little real marketing research had been done by the publishers, either to compute the real costs of e-book production (essentially, all of the pre-production steps, but without any printing, storage and physical distribution costs), or to establish their position in the marketplace next to printed books. Publishers, perhaps hoping that these lower-cost products could instead be priced similarly to higher-production printed books, and therefore turn more of a profit, set their prices accordingly; and immediately gave the public impression that they were trying to gouge the consumer. Discussions pro and con, often joined by self-published authors on both sides of the argument, did not help to answer the vital unanswered question, the real numbers involved in e-book production… so the debate rages on. This has become one of the most vicious of discussions regarding e-books, perhaps second only to the discussions related to DRM.
Digital Rights Management was at least a more practical subject of discussion, but hardly less emotional. Some consumers railed at the very idea that some forms of DRM would prevent them from treating their e-books exactly as they had treated their printed books: you could not lend an e-book to a friend; you could not resell it at the used bookstore; you could not transfer an e-book from one device to another; and if the hardware device were damaged or destroyed, or the DRM code was lost, the book might never be available again, as if it had never existed; and so on.
Other consumers and producers pointed out that e-books were in fact not printed books, and that consumers shouldn’t expect to handle them exactly as they would a printed book… to which nay-sayers cried: “What’s the point, then?” Consumers seemed torn over the very subject of exactly what an e-book was, and what should be expected of it, and of the entire industry. And there was enough waffling and stubbornness on both sides to guarantee no consensus would be reached anytime soon.
And other consumers stated that, until the question was answered to their satisfaction, they would simply act to circumvent DRM measures, making their e-books available to anyone. They did not specifically state that they would give their e-books away to anyone… but the implied potential (or threat, depending on your point of view) was there, and very often, it was pointedly not denied, either.
~
The publishers watched all of this, and they fretted over it daily. It seemed consumers were intent on tearing the publishing castle down around their own ears in their anger and haste. How can we placate these people? Given past efforts, and failures, should we even try? Or will all of this blow over soon enough, and prove to be just a blip on our long-term estimates? Publishers, seemingly frozen into inaction by consumers’ online arguing, fighting, stealing and hacking, increasingly decided to stay out of the melee and opt for the “we hope it blows over” position. A few publishers made token efforts to extend the olive branch, only to find it frequently ignored, or even slapped away if some aspect of their business plan displeased the consumers (and something usually did). The consumers were confident they were in the driver’s seat, and seemingly unaware that they were driving the publishing industry right at a brick wall. Or, if they were aware, the consumers clearly thought their vehicle would simply break through the wall with minor scratches, and carry them into the utopian world beyond. They were unconcerned about the likelihood that Big Pub would not survive the collision, and dismissive of the possibility that the death of Big Pub would throw e-books, and every other kind of book besides, into a literary Dark Age of unknown severity.
All they wanted was to get through the wall, no matter the cost. As Big Pub tried desperately to steer them aside, consumers gleefully avoided its roadblocks and continued toward the wall at full speed. There is no way to accurately determine the result of the inevitable collision, other than to say that the results will almost certainly not be pretty.
11: Luddites and fanatics—What’s wrong with just reading books?
The e-book concept may have been popular with some people who appreciated consumer electronics, were used to dealing with digital documents, and saw distinct advantages to the new electronic formats. But there were much larger groups of people who opposed e-books in favor of good old printed books. It seemed there were myriad reasons for them to want things to stay the way they were, and those people wasted no time making their feelings known.
~
Most vocal of the various groups were the Traditionalists. Traditionalists essentially saw no reason to replace printed books. After all, printed books had been around for centuries, and it was reasoned that anything that had been around for that long couldn’t be bad. They generally spoke of books reverently, romantically, and historically.
Traditionalists liked to talk about the idealistic ways to enjoy literature, with the implicit suggestion that e-books could not be enjoyed in similar fashions. An iconic scenario brought up in defense of the printed book was “reading in the bath”: a relaxing, romantic notion that, in fact, was very rarely enjoyed by the majority of readers. The frequent claim was that any e-book reading device that got wet or was dropped into the water would be ruined (as if a printed book dropped into the water would come out unscathed). It was a purely idealistic argument, but it actually had sway over some people who valued the ideal ways of reading a book… even if they themselves had never or rarely experienced them.
Other, more common arguments centered around the physical and tactile differences between printed books and e-books. The most obvious of these was related to the electronic devices used to read the e-books: these varied from computers and laptops, to PDAs, dedicated devices (designed for nothing but reading e-books), netbooks, Blackberries and other electronic devices. These devices all had in common a display screen, and some sort of controls that scrolled the text, or moved from “page” to “page.” They were also made of metal or plastic casings, of various sizes and weights. Traditionalists compared these artificial cases and control buttons with simple, leather-bound paper volumes, and the fact that the only thing you had to do to read a book was to turn a page. Dedicated devices were considered too complicated compared to the simple interaction of a printed book, all those buttons considered a distraction from the principal act of reading.
The screens themselves were also a major point of contention. Traditionalists insisted that an electronic display screen was vastly inferior to paper when it came to delivering text to the eye. Many traditionalists complained about severe eyestrain resulting from reading some display screens. (To be fair, they were not alone: many e-book enthusiasts similarly complained about eyestrain from reading on LCD display screens, for a time the most common display technology available.) They claimed that no display would ever be as good as paper for reading.
Finally, there was the contention that e-books used energy to operate, something a printed book did not need. Traditionalists defended the idea that printed books were more economical because no batteries were needed to use them, and therefore no drained batteries would end up in our landfills.
In fact, these were not so much condemnations of e-books as condemnations of doing old things in new ways, and they were very familiar to computer enthusiasts. Those who resisted modern technology and practices were generally thought of as “Luddites” by the computer-savvy, and looked at as backward, old-fashioned country bumpkins, “simple folk.” Unfortunately, the computer-savvy had never been good at directly addressing the concerns of Traditionalists; they were much better at denigrating them with words like “Luddite” and dismissing their concerns. With no one attempting to address their positions, and with computer fans turning their backs on them, the Traditionalists continued their mantra, and depended on sheer force of numbers to carry its weight. Because they were in fact the significant majority when it came to literature and reading, their opinions did become dominant, and kept many potential customers either hesitant about or defiant of e-books.
~
Arguing against the Traditionalist point of view were the Progressives (of which I always counted myself a member). The Progressives were mostly, but not exclusively, among the computer-savvy; many of them simply saw an inherent value in e-books above their traditional counterparts. Many of these values were what drove them to e-books in the first place.
Unlike the Traditionalists’ romantic bent about printed books, Progressives saw a practicality to e-books: to begin with, they felt that literature could be enjoyed in any form; it did not have to be ink printed on paper. For them, the printed volume was only a container… it was the words within that mattered. To that end, they considered e-book reading devices to be perfectly suitable containers for literature. They also discovered that getting used to pressing buttons to advance a page was no more challenging or distracting than flipping a piece of paper over—both practices became rote when one was totally engrossed in a book.
And more, the particular strength of e-book reading devices was that they were flexible in ways a printed book could not match: They could alter the display screen in ways that were easier on the eye; they could alter font type, size and color, also making it easier on the eyes; and they could make possible features like highlighting words and looking them up in a built-in dictionary, following links to web-based content, or jumping to footnotes and back again.
Progressives saw an economy to e-books, since it was considered by almost all e-book consumers that e-books should cost less than printed books. Though obviously all electronic devices cost money, a smart consumer could conceivably fill their reader with hundreds or thousands of books at lesser cost than their printed brethren (even free, in the case of many public domain works), and thereby save money on their book buying.
There was another economy at work—an ecological one. The recently-escalated concerns about global warming were bringing more people to realizations about how the manufacturing of certain products impacted the ecosystem. Paper production, never of major concern to consumers in the past, became a hot-button issue for many when it was discovered how inherently wasteful and polluting the process was, using copious amounts of fresh water, dozens of toxic and caustic chemicals (which were dumped back into the local water tables after use), and an incredible amount of electricity to generate the paper used in the world’s books and newspapers. Though an electronic device also used toxic chemicals and water in manufacture, a single electronic device could also hold (and therefore remove the need to print) hundreds to thousands of books, revealing an economy of scale that was an undeniable ecological advantage.
The Progressives trumpeted these e-book advantages with a fanatical devotion and zeal equal to the strength of the Traditionalists’ arguments. They generally dismissed many, though not all, of the arguments of the Traditionalists as hopelessly provincial, while the Traditionalists considered the Progressives to be lacking in their ability to truly appreciate a book. This mutual derision persists, as it does with many old pastimes that have transitioned (or tried to transition) to electronic versions.
~
Interestingly, an argument that might have aided the Progressives’ cause was rarely, if ever, brought up: historical precedent. The technological age was replete with examples of newer technologies replacing older ones, despite the passionate resistance of a significant portion of the population, and despite what was considered “common sense” at the time.
A good and fairly recent example, which should be personally familiar to most readers today, would be the typing keyboard: Prior to 1960, the typical keyboard was essentially part of a typewriter, a large, heavy, manually-operated machine that many considered state-of-the-art for putting words on paper. When electric typewriters came along in the 1960s, many veteran typists stated plainly and absolutely that people would “never get used to” the sleeker, flatter, solenoid-operated touch-keyboard; that the ergonomics were counter to human fingers; they would never catch on. But catch on they did, to the extent that by the 1970s, no one could imagine (and most could barely remember) using an old manual typewriter.
When home computers and portable computers came about in the 80s and 90s, keyboards shrank slightly. Typists immediately revolted, claiming the original size of the keyboard was optimized for human hands and the smaller keys would be too small to type on without striking multiple keys, and making more mistakes. But again, users got used to the smaller sizes, and soon, the older “full size” keyboards seemed like dinosaurs to them.
Then came the PDA, with its small on-screen keyboard, the accessory keyboard (often ¾ or less the size of a computer keyboard) and the transcriber pad (which let you write characters by hand, one at a time, to be transcribed into typed text). Again, computer users insisted that people would never accept the tiny plug-in or “virtual” keyboards… but those who used the devices daily were soon typing as fast on the tiny devices as they were on the larger computer keyboards.
And today, devices like the Blackberry and various cellphones have keyboards that are literally only a few square centimeters in size. Even those who have gotten used to the small PDA keyboards often goggle at the sight of a Blackberry user typing with their thumbs at incredibly high speeds—a talented bunch that the Japanese refer to as “Oyayubizoku,” or “Clan of the thumbs.” If you’ve never seen such a thumb-typist in action, you should… it’s worth the price of admission. Go find a teenager, and they’ll be glad to show you.
The typing keyboard is only one example of the many technological developments or changes that had overcome popular resistance to become the dominant technology used: The train over the stagecoach (“Man cannot survive at speeds faster than thirty miles an hour” was an accepted “fact” before then); the automobile over the horse; the airplane over the train; the gas and electric light over the candle; the fountain pen over the quill; and many others. Clearly, if a person or group decided they wanted to use a device, they could almost always get used to the vagaries of said device to satisfy themselves. The corollary suggested that most of those who insisted that using a device was impossible, had simply decided that they did not want to try to adapt to it.
After years of witnessing and participating in such evolutionary acceptance of new and different technological developments, I coined the statement: “You get used to what you want to get used to.” Over the years, I have repeatedly seen new technologies that seemed at first almost impossible to adapt to… yet, those who were dedicated to those technologies developed new skills and habits, and in no time made using them look deceptively easy. I count myself in this number, something I am reminded of every time I start writing novel chapters with a stylus on an on-screen PDA keyboard. And I am still amazed every time I watch a commuter on a moving train thumb-typing at typewriter speeds on a Blackberry! I have used the phrase so much that I’ve taken to calling it “Jordan’s Theorem.”
But despite the clear evidence of history, Progressives largely ignored this particular argument, and emphasized the other points of economy and practicality that made up the backbone of their reasoning for e-books.
~
There were a few groups in-between the arguments, who took positions based on what was perceived as most advantageous for them; but as the statement suggests, perception may have had less to do with reality than with desire.
Authors, for instance, took varying views of e-books depending on whether they considered them a boon or a threat. A few bestselling authors had tried to sell their books as e-books, only to discover that the majority of customers seemed unwilling to pay their requested price for the e-books. Stephen King famously conducted an experiment with a novel that he released in chapters, asking his customers to pay after downloading each chapter, the idea being that as long as he received a set amount of money for each chapter, he would keep writing. But mediocre reviews of the early work, coupled with the large cumulative amount that would have been required to buy all the chapters (the equivalent of a hardback volume’s cost, and an expensive one at that), resulted in less and less of a profit with each chapter: enough people were still downloading the chapters to potentially earn King enough for him to continue, but fewer and fewer were actually paying for the downloads. The book was halted in mid-run, and to this date has never been completed. King and other authors have little positive to say about e-books, which they see as an easy way for consumers to rip them off, and so favor the Traditionalist views that rely on printed matter (though even King has recently agreed to release his older and out-of-print books in e-book formats).
Other authors reasoned that e-books, not being “physical products,” were nonetheless suitable as promotional material for the “real” products, printed books. If people wanted free e-books, therefore, they could have them. A few authors swear by the idea that the more people read their books in whatever form, the more printed books they will sell to those people later. Author Cory Doctorow has said, “An author’s greatest threat is obscurity,” and he uses free e-books of his work to spread his name about and encourage sales of his printed books. Doctorow and authors like him are actually standing with feet in both camps, taking advantage of e-books, but with the overriding belief that printed books are still superior products, and the ones that earn them a living.
And there is a large contingent of authors who do not have a contract with any publisher, and thus have no printed versions of their works. For these authors, e-books are the only way they can publish and sell their words (vanity presses are an option, but they are hardly economically viable models). They pan the Traditional method, as it is so closely tied to the publishers that shun them, and embrace the Progressive method.
Editors, by nature, are opportunists… they do their work for others, so they generally go where the money is. Right now, the money for most editors is with the mostly Traditionalist publishers and the writers who have publisher contracts. That makes most editors Traditionalists by virtue of circumstance. A few editors have reached out to self-publishing writers, in the hopes of creating a new source of income outside of the traditional routes, but so far, the model of writers hiring their own editors has not largely taken off in the market. Until it does, expect editors to continue to support printed books, the Big Pub machines, and Traditionalist values.
The computer industry is most interested in the Progressive camp. It already serves the Traditionalist camp, providing computer services for publishers (yes, they really do use computers!) and retailers; but promoting the e-book industry would mean a new revenue source, and more money in its pockets. Unfortunately for the industry, publishers and authors have turned to it to solve the security-related issues of selling digital files, something it is in no position to accomplish in any effective manner at the moment. But that doesn’t stop it from trying, and certainly from making money off of R&D into Digital Rights Management systems (which it must know are not at all secure, given the present hardware/software environment). The computer industry is also very aware of the negative popular attitudes toward any type of computer-based security system, and does not want the negative publicity that would result from an improper rollout of such a system. So it tries to keep a low profile, and makes sure the public understands that it is only doing what its clients tell it to do (popularly known as “the Stormtrooper defense”: We’re only following orders).
~
Big Publishing kept a close eye on the discussions of Traditionalists and Progressives, who were, after all, their customers. However, they were well aware that converting their operations over to e-book production would cost them time, effort and money, none of which they wanted to spend. The economies that e-book proponents supported were not economies to them at all; rather, they were losses: smaller expected profits from products of lesser value than printed books, further income lost to expected theft, and on top of that, the cost of rebuilding their infrastructures to accommodate a new business model and product. For Big Pub, there was no logic in investing huge amounts of money to achieve smaller profits.
Unfortunately for e-books, the Traditionalists happened to outnumber the Progressives by a significant amount, both in terms of sheer numbers, and in the number of books they bought (and the profit brought in by those sales). Given the choice, therefore, of satisfying the much larger Traditionalist audience who favored the status quo of printed books, or supporting the much smaller Progressives and spending the money to build a lower-profit industry, there was little question which choice publishers preferred.
So Big Pub supported the Traditionalists, knowing that the longer they were happy, the longer they could sell printed books, make their money, and not have to bother about e-books anytime soon. E-book Progressives knew this, and tried to reason with Big Pub, but they could not be swayed: The sales numbers were with the Traditionalists, which the publishers could present to shareholders to justify their avoidance of modernization. E-books were simply not profitable to them, so they continued to create products for the Traditionalists.
~
Along with print publishing, there was a sizable group of industries that likewise depended on the existence of printed books. The paper production industry was mentioned earlier, and at the time e-books were making their presence felt, the paper industry was already reeling from the losses it had suffered at the hands of the declining newspaper and magazine industries. As the Web flourished, people were getting more and more of their news online, and less in print. Many production plants had shut down over the previous decade, including some of the oldest paper-production lines in the industry. Production costs were rising, and the country’s environmental movement meant tougher regulations and more limited forest stock. The remaining houses that still produced paper for literature knew they could not afford a serious cutback there as well.
So the paper producers supported Big Pub, and the drive to keep books on paper. Unfortunately, they had to do so quietly, as their extremely dirty production practices had recently been presented to the public, and they were already afraid of local backlashes against their operations that might result in a significant loss to their profit margins, or possibly outright shutdown. Efforts to claim better environmental stewardship on their part were mostly hollow, and ignored by environmentalists, who had gotten very efficient at calling out specious “green” claims.
There was also the transportation industry to be reckoned with. Paper had to be driven to printing plants; printed volumes had to be driven to warehouses; boxes of books had to be delivered to stores, or shipped directly to consumers. Truck transportation was a massive part of the printed book business, and the truckers saw the potential losses that e-books could create. And, like the paper production industry, the trucking industry had environmental issues that it did not want discussed in public, namely, the amount of diesel soot pumped into the air by its trucks as they crisscrossed the country. So it also acted as a silent supporter of Big Pub, allowing the publishers to maintain center stage while it hung, like the paper production industry, in the shadows.
And finally, there were the places where the books were stored. Warehouses and bookstores generally stored books in spaces designed to keep them in optimal condition for sale. That meant spaces that were generally temperature- and humidity-controlled. As books were considered valuable property, they were watched by security systems, and energy was burned to keep them in lighted areas. Even sitting in one place, printed books seemed to use energy and cost money. No one was that seriously concerned that storefronts and warehouses were burning through fossil fuels or polluting the environment, so they could be a much more vocal part of Big Pub’s resistance to e-books, and of its insistence that a brick-and-mortar presence of physical merchandise was essential to the longevity of the traditional business.
~
Throughout these Progressive vs. Traditional debates, both sides have demonstrated little or no inclination to actually listen to the other parties. On some issues, there could have been effective and practical compromise; but an absolute refusal to sacrifice a dollar of profit, or an ounce of property rights (which will be covered in more detail in Chapters 14 and 15), meant one or both sides would refuse to give an inch, effectively shooting down any possibility of a workable middle ground.
12: The accountants—How much is an electron?
Accounting related to digital files in general has been especially tricky, and e-books have been no exception. Accountants quickly discovered that electronic files presented their own unique problems when it came to establishing them on a ledger sheet.
Primarily, electronic files are not physical products in the traditional sense of the word. Once one is created, it can be replicated at virtually zero cost. That means that once the cost of creating the first file is recovered through sales, money made off of subsequent files is pure profit.
However, the problem with the ephemeral quality of electronic files is that they can be replicated so easily by anyone, meaning a customer can conceivably replicate his own purchased file and release those versions for free, thereby removing any future profit that might have been gained from selling those replications. Consumers are well aware of this, and some have made a habit out of exploiting that fact. The amount of loss from what is commonly called “pirated” electronic works is estimated to be significant… but because of the present nature of the web, the actual amount of loss cannot be accurately measured, leaving people on both sides to casually toss around high and low figures with no facts backing them up.
This has been a problem ever since electronic documents were devised, and to date, no foolproof method of protecting a document from unauthorized replication and theft has been found. But a number of solutions have been tried, and it is clearly the hope of all digital file sellers that they will one day find the method that will work for all electronic files, including e-books.
~
In the meantime, accountants needed a model to apply to digital selling. The model had to do two things: It had to provide more of a return than was spent on creating the digital document; and it had to guarantee a maximum return from customers, based on the concept that lower prices meant more purchases, higher prices meant fewer purchases, and the highest possible return was somewhere at the top of a bell curve between those extremes.
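The accountants' bell-curve reasoning can be sketched numerically. The demand numbers below are purely hypothetical (the book cites no real sales figures); the sketch only illustrates the idea that revenue peaks somewhere between the free and premium extremes:

```python
# Illustrative only: a made-up linear demand model for e-book pricing.
# Neither the unit counts nor the price sensitivity come from real data.

def units_sold(price, max_units=10000, sensitivity=500):
    """Assume every extra dollar of price costs 'sensitivity' sales."""
    return max(0, max_units - sensitivity * price)

def revenue(price):
    return price * units_sold(price)

# Scan candidate prices to find the top of the revenue "bell curve".
prices = [p / 2 for p in range(0, 41)]   # $0.00 .. $20.00 in 50-cent steps
best = max(prices, key=revenue)
print(f"Revenue peaks near ${best:.2f}: ${revenue(best):,.0f}")
```

Under these invented numbers the peak falls at $10: cheaper prices sell more copies but earn less per copy, and dearer prices earn more per copy but sell too few, exactly the trade-off the accountants were trying to locate.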
Consumers were well aware of the inherent differences between physical and digital goods. Unfortunately, they were not nearly as well-versed in the production costs inherent in creating a piece of literature (due, in no small part, to the closed-door business practices of the publishers). Consumers also showed little or no sympathy for those costs, owing primarily to accounts, anecdotal and documented alike, of printed works that had suffered from inferior production, editing, or proofing. Customers took these accounts to indicate that the work being done by editors and publishers was often not worth the money being spent, and they were all in favor of cutting those costs out altogether.
This supposed zero-tolerance on the part of consumers made it very difficult to establish reasonable margins for e-book sales. Guided by how physical products worked, by what they knew of production costs, and by vague notions about the nature of digital sales on the web, the accountants began trying various scenarios to see if one would work.
The first scenario tried was simply to use the printed book model for e-books. Customers rejected this instantly and angrily, reasoning that the costs of printing, shipping and storing printed books were not present in e-books. Customers considered that paying those prices meant they were in essence paying for something they were not getting… or, put another way, that publishers were cutting a major part of their production costs away and greedily not sharing the savings with their customers. Since customers did not believe an e-book was equal in value to a physical product, they would not accept the printed book cost as the value of the literature itself. In fact, many of them still insisted the e-book should be worth nothing, and many refused even to accept the notion that publishers deserved any profit from their labors at all.
On the other hand, businesses largely accepted the print prices for e-books. But it was easy for them to do so, because they could label the purchase as a tax write-off, making their actual payout zero (or, put another way, forcing the rest of us to pay for their e-books through our taxes… nifty, huh?). Even with businesses’ tacit acceptance of high e-book prices, however, the consumers were too large a market to ignore.
The web had experimented early on with advertising-based revenue to subsidize web content, and e-book publishers looked at that. Conceivably, if an advertiser subsidized the content, an e-book could essentially be given away to customers for a very low price, or even free. Television, radio and magazines were all examples of cases where ad revenue managed to replace part or all of the customer’s cost without undue angst on the part of the customers (for, while few people say they actually enjoy commercials, even fewer refuse to watch television because of them).
Again, customers roundly rejected the notion of ads in their e-books. Ironically, the majority of customers had not actually had experience with e-book ads, as few e-book sellers had ever actually included them in their books previously. But customers drew from memories of the worst of ad-based websites, and television stations that seemingly play five minutes of commercials for every ten minutes of content, and they assumed that any ads put into e-books would by default be the most distracting and annoying of content, repeated every few pages, probably animated, and impossible to ignore. (Although this would seem on its face to be an exaggeration on my part, in order to make a point, it is not: This reflects actual statements and fears of consumers, as they themselves have stated them in numerous forums and settings.) At some point, the possibilities of ads in e-books may be more thoroughly tested, but at the moment, alarmingly adverse customer reaction has kept this potentially good idea at bay.
Subscription models were proposed next. A few web sites, like Baen Books and Zinio, have demonstrated some success at subscription-based e-book groups. Subscriptions essentially replace high-profit-per-product revenues, which can vary widely with sales over time, with lower but steadier profits on periodic releases. This model could work with a publisher large enough to ensure a regular output of content for its subscribers, but smaller outfits could not necessarily guarantee that regular an output. And even the larger publishers did not like the idea of the lowered revenue per unit this would bring in, nor the accounting obstacle courses they’d need to run to provide varied compensation to various artists based on steady-stream incomes.
~
At a loss to develop a satisfactory sales model, some e-book proponents began looking at the question from a more philosophical point of view, considering e-books not from the standpoint of product, but from the idea of content. It was reasoned that printed books were simply bound volumes of paper and ink… the content was the words, the ideas presented therein. Similarly, it did not matter what electronic container the e-book came inside… it was the words, the ideas, that made up the meat of the product.
That meant a completely new model was required, one based not on how many pages a story would fill, but on the amount of work that went into creating the story, editing it, proofing it, and preparing it for sale. But there were no precedents for a new system, and no support from within the industry to develop one. Despite calls from outside the industry to develop just such a system, those within publishing believed it would ultimately be less profitable for all parties concerned; so all concerned parties passed on it.
Independently of them, however, many self-publishing e-book authors were developing their own scale of an e-book’s worth, and instead of waiting for the indecisive publishing industry to dictate to them, they started selling their e-books at the amounts they felt they were worth. These e-books went for anywhere from one to five dollars on average, a reasonable amount to the independent authors (since they were receiving all of the proceeds from each sale).
This upset the Big Pub machine, for they were positive they could not sell the e-books of established and famous authors at that price. Prices like that were too far down the low end of the bell curve they were used to working with… the printed-book bell curve. Payments, royalties, etc., were already balanced for the high point of the curve—they would have to be severely rewritten to accommodate selling at any other point, which every instinct told them would not work anyway.
So when the first major publishers started selling their e-books, they priced them along the lines of printed book prices—along the bell curves they understood—while publicly suggesting that their level of quality control and vetting made their higher prices worthwhile. Consumers generally disagreed: They thought publishers were crazy to sell e-books at printed book prices, and sales were dismal at first. Big Pub merely held their results up as proof that e-books were ultimately unprofitable anyway, and stayed the course. And the accountants, largely being ignored by Big Pub, eventually shrugged their shoulders, accepted their situation, and sat back to wait for the train wreck.
As long as Big Pub had no interest in experimentation to establish some concrete sales figures, and therefore had no e-book related figures to make a decision from, they made no decisions. The longer they made no decisions, the longer they held onto print-based business models and held up their failures as justification for taking no further action… presumably, to avoid making things worse. It was a vicious circle of willing ignorance, and it would eventually take the actions of one of the newest giants of the publishing world to break the circle created by the oldest players.
13: Security—The DRM bogeyman
It is an accepted axiom in business that any object or product that can be bought can also be stolen. Theft is a risk of most businesses, and is accounted for as an operational loss. To make a profit, a business’s income must exceed its losses; therefore, the better a business can maximize income and minimize loss, the more profit it will make, and the more successful it will be.
This is the basic concept behind product security: Minimize loss through theft. Very few businesses in the world make no effort to minimize such losses, and those that don’t rarely manage to stay in business for long.
Security can be as elaborate as a set of cameras, recorders and motion detectors; or as simple as a watchful eye, a glass case, or a lock and key. Security does not even have to be perfect to be effective: It only has to keep losses down to an amount that doesn’t offset income. Every business decides for itself how much loss is considered acceptable, and how much money to spend on security before it reaches the point of diminishing returns, in other words, the point at which the security costs more than the losses it prevents.
This concept of product security is well understood worldwide, and was considered applicable to every business… until the software industry came along, and changed the very concept of products.
~
Physical products and software products had this in common: Both required an expenditure of money for initial production. But software products, unlike physical products, could be replicated after initial production with very little expenditure of cost or effort. Effectively, copies of a software product could be created for virtually nothing. Physical products’ costs were partially defined by the costs of producing and replicating them, and adjusted by supply and demand, and desired profit. With the software replication cost reduced to effectively zero, only the aspects of initial production, supply, demand and profit were left.
Unfortunately, consumers cared little for the demands of business, and they considered supply, demand and profit costs to be fully artificial—which they were—and they knew nothing of the costs of initial production, in terms of people or resources. Consumers also assumed a piece of software could be sold to enough people to earn back its production costs in no time… again, they gave little thought to a business’ intended profitability. This put businesses and consumers at odds over software costs, and planted the consumer notion that software should cost next to nothing to obtain.
Although most commercial software invariably had a cost involved with it, most customers thought it was completely acceptable to make copies of that software and give it away to friends, neighbors and complete strangers, for free. After all, it had cost nothing to replicate the programs for all those people. Moreover, most consumers thought nothing of accepting and using those software applications without paying for them, often reasoning to themselves that the software companies were charging too much for the software, so they were justified in taking a free copy if they could get it. During the 1990s, it was more common for computer users to obtain bootleg copies of their computer’s operating systems and most widely-used applications than it was for them to pay for them.
As time went by, and customers got used to the pursuit and use of free software, the idea that all software should be free (by virtue of its zero-cost to replicate) became a commonly-accepted guideline. The advent of the web, and a virtually-free worldwide network with which to disseminate all that software, served to reinforce this belief with users. Entire movements grew around the Utopian ideal of free content for all, with many predicting it was the inevitable future for Mankind to get all content for free, software being only the first of what would eventually be all products, software or physical, worldwide.
The companies and programmers who created the software, quite naturally, felt they were within their rights to be paid for their work, just as those worldwide users had the right to demand a wage for their daily jobs. But the companies were at a disadvantage, as their products were among the first in human history to be so easily replicated. They knew they would need a way to secure their products in order to make their desired profits.
This reality led some companies to take action to secure their software, to prevent as much bootlegging as possible, and minimize loss. The first Digital Rights Management, or DRM, systems were born to secure their products. Many of these systems were as simple as encrypting a password into the software when purchased, so users would have to enter that password at least once, and sometimes every time they opened the software, in order to use it. Some systems tied software to a hardware “dongle,” usually a device that had to be plugged into the computer as the software was being used, thereby limiting the software to be used on only one computer at a time.
Many of these security systems were actually pretty effective, for a time. But the hardware dongle proved to be unworkable on some newer computers, rendering expensive software useless if the user upgraded their computer to a system that was incompatible with the dongle. And passwords could be passed on to others. Most importantly, the consumers who had come to accept the Utopian “free software” concept resented these security systems, and believed that DRM’s very existence implied they were being thought of as criminals by the software companies. They responded by purposely giving away passwords and copying applications, ironically proving themselves to be just as criminal as they believed they were perceived.
Some companies noted the public dissatisfaction with security, and took the long view: They reasoned that it made more sense to give the software away, and allow users to spread it as far as they could; eventually, the software could become effectively ubiquitous in the market; then they would secure updated versions of the software, forcing customers to pay for future versions. Microsoft and Adobe were two of the largest practitioners of this market saturation method, allowing customers to disseminate their most popular (and most expensive) programs until those programs and operating systems were considered indispensable to customers. Then they instituted DRM systems on subsequent versions, and told the customers it was up to them to upgrade, or use something else. Customers, deciding it was easier to simply pay for the upgrade than to start over with unfamiliar replacements, grumbled aplenty, but they paid up. Today, Microsoft and Adobe make millions on their software applications, because they had gotten the customers hooked like junkies on free samples before they started forcing them to pay for their next fix.
~
The first e-books were largely offered for free to consumers. But as time went by, e-book writers and publishers decided that they should be able to profit off of their works, just as any other artist or craftsman deserved to. They soon began to charge for their e-books, at varying prices depending on whether they considered their e-books equal products to printed books, or tried to assign a unique value to them based on the realities of the software market.
But after years of offering the e-books for free, charging anything for e-books did not sit well with many consumers. They fought the concept vehemently, and frequently rebelled against the writers and publishers by deliberately putting their purchased e-books online for others to download for free.
This inexorably led e-book writers and publishers to seek ways of securing their content, and they began to investigate the same DRM systems that had been used to secure software applications. Some of the first DRM systems for e-books involved establishing a code for the e-book reading software or hardware, and matching the e-book to that code, to ensure that only that one application or device could read the e-book. Other methods involved entering a code such as the customer’s ID information, or perhaps the credit card used to purchase the e-book, in order to unlock the e-book and allow it to be read.
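The device-binding idea described above can be sketched in a few lines. Everything here is hypothetical: real vendors used proprietary schemes with proper ciphers, while this toy version stands in a trivial XOR scramble keyed to an invented device identifier, just to show why the book becomes unreadable on any other device:

```python
# Hypothetical sketch of device-locked DRM: the e-book text is scrambled
# with a key derived from a device identifier, so only a reader that can
# reproduce that identifier recovers the text. Real DRM used real ciphers;
# the XOR here only illustrates the binding concept.
import hashlib
from itertools import cycle

def device_key(device_id: str) -> bytes:
    # Derive a repeatable key from the device's identifier.
    return hashlib.sha256(device_id.encode()).digest()

def lock(data: bytes, device_id: str) -> bytes:
    key = device_key(device_id)
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

unlock = lock  # XOR is symmetric: the same operation locks and unlocks

book = b"It was a dark and stormy night..."
locked = lock(book, device_id="READER-1234")   # invented device ID

assert unlock(locked, "READER-1234") == book   # the bound device can read it
assert unlock(locked, "READER-9999") != book   # any other device gets gibberish
```

The sketch also makes the chapter's later point visible: the "lock" depends entirely on the reading device cooperating, which is exactly what failed when hardware was replaced or vendors shut down.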
Customers were not exactly thrilled to see security attached to e-books, but the first e-book readers were happy enough to have e-books to read. So most of them agreed to the security method imposed by whatever book they bought, thought no more about it, and enjoyed themselves with their e-books. Unfortunately, there were some hidden catches regarding the security methods, most of which could be attributed to a lack of foresight on the part of the DRM designers, which would turn about and bite the desire for security in the rear.
~
The first DRM-related problems began occurring when consumers upgraded their reading hardware, either to take advantage of newer technology, or because their old hardware had become lost or damaged. Just like the problems consumers faced with software when replacing their computers, many discovered that some security-burdened e-book applications would no longer play on new hardware. In many cases, new versions of the software could be downloaded, only for users to discover that the serial number of the old software application had to be entered into the new application for it to read the e-books registered to it. This was often a problem, as the old hardware and software wasn’t always available to supply those registration numbers (having been lost, stolen or thrown away once the new hardware was obtained). And in some cases, simply entering the serial numbers wasn’t that straightforward, and stymied the efforts of consumers. Some DRM systems used the customers’ credit card numbers as their security code… not taking into account that, if those cards were replaced later, consumers might not keep the original number handy to open e-books a few years down the line. When this happened, entire collections of e-books were rendered so much electronic dross, unable to be played on the new hardware and software. Any e-books that had been paid for were similarly lost, and many vendors were not sympathetic to consumers’ requests for fresh copies of the e-books, or for refunds.
The next problems were caused by e-book sellers who were perhaps too hasty in entering the e-book market, or who did not do well once they were in business. After a few years, some of the many companies that had gone into business selling e-books found themselves unprofitable or struggling, and decided to cut their losses and close their doors. Some of these companies used security systems that required their direct interaction to fix any problems that arose, for instance, changing the serial number on a software application or a purchased e-book after files were corrupted or replaced. But when they closed, there was no one to provide that support to consumers.
Over time, most consumers would find themselves either replacing hardware or software, or needing support from now-defunct companies to continue to access their e-books. In most of these cases, consumers found themselves out of luck, the e-books they had paid for suddenly gone, without even a dust jacket to mark their passing. This caused a furor in consumer circles against any e-book that was secured by any DRM system, and especially those that were paid for, because it was assumed that sooner or later, some electronic glitch or an action by the company who sold the book would duly destroy what they had bought. This led to the impression voiced by many consumers that e-books with security were not “bought,” but “rented.” And the consumers felt that if that was the case, either the books’ “rental” prices ought to be lower, or there should be no DRM attached to them at all.
~
Some of the consumers who felt they had been “stung” by what they considered unscrupulous business practices (no one had told them in advance that their e-books could be so transitory) took out their frustrations by venting their wrath at the booksellers, by exploring the darker reaches of the web for free-to-download copies of their lost e-books that had been put online illegally by other consumers, or by posting their own e-books online for others to take for free. Those consumers who believed in Free-For-All web content openly supported their efforts, and clandestine websites sprang up to share the copyrighted content freely.
During this time, amateur hackers began writing and disseminating software that was designed to “break” the DRM attached to many of the e-book programs. Once the DRM was removed, the e-book could be copied, converted to other formats, and read on any number of devices, present and future, by the owner. This was already the situation that consumers had gotten used to with music, and their ability to record the cuts on an album, say, to cassette, and later to MP3, for private use as they saw fit. They saw no reason why e-books should be any different, so those consumers who were tech-savvy enough to use the cracking software downloaded it and applied it to their books. And some of them turned around and put those e-books onto the illegal downloading and peer-to-peer sites, making them available to others… often as a direct and blatant defiance of those who placed the DRM software on the e-book in the first place.
The legality of the DRM-cracking software varied from country to country, according to their copyright laws: The software was outright illegal in some places; in others it was legal to have, but not to use; in still others it was legal to have and, strictly speaking, legal to use, but any cracked e-books would themselves be considered illegal. And in some places it was legal to own, legal to use, legal to keep cracked e-books, and even legal to disseminate them on peer-to-peer sites (more on this in Chapter 14).
Whatever the laws stated, it was clear that the governments were not of the same mind when it came to protecting e-book authors and publishers trying to make a living. Faced with a lack of government support, e-book publishers sought ways to make DRM more robust and harder to crack. Unfortunately for them, the basic architecture of the computer made it virtually impossible for them to accomplish their goal… as hackers were eager to prove. As fast as a new DRM system was created, a hacker would gleefully break it, and quickly disseminate his cracking software to the world. A war of DRM escalation had begun, with both sides seen as the aggressor by the other, apparently no chance that one side or the other would capitulate, and no end in sight.
~
As the DRM wars raged on, the public began to openly debate the usefulness of having DRM at all: It seemed only to hinder a customer’s ability to obtain, use, or keep e-books; and it was often as easy to crack as a padlock made of balsa wood. There seemed to be no point to using a security method that was not secure, and that actually discouraged customers.
The need for DRM was entwined with authors’ and publishers’ desire to profit from their work; unfortunately, the bulk of the public was still mostly unsympathetic to their desires. The majority of consumers were still vocal about their desire for free content, or paid lip-service to paying for content but didn’t think twice about downloading any free content they found, legal or not. Authors who tried to join these conversations in their own defense were often challenged by Anarchist consumers urging them to give up their creations for their web-based Utopia. Publishers were accused of being the worst of greedy Capitalists, and worse, and advised to stand aside and let progress roll… or, better yet, to stay right where they were so progress could roll right over them.
A few tried to conceptualize these Utopian worlds, and the ways in which authors would be compensated for their works. Most of these schemes involved tax-sponsored government doles, or a revival of the old patronage system that had largely fallen away under the Capitalist system. None of their ideas, however, could manage to provide compensation for any but a few lucky authors, and at any rate, establishing these new systems would require government cooperation. In short, they were pie-in-the-sky ideas that no one was even taking to the government to propose.
Some enthusiasts suggested a more benign form of DRM would be as successful as the existing security methods, but less obtrusive to customers: This “social DRM” would do no more than encrypt the purchasers’ name, and possibly other identifying information such as a social security or credit card number, into an otherwise-unsecured e-book; the idea being that a combination of civic responsibility, and an aversion to making their private information public, would keep e-book buyers from disseminating the works. Unfortunately, even these measures were capable of being removed or spoofed, and they might provide a further danger if unintentionally lost or stolen (or disseminated with someone else’s name and identifying information intentionally embedded within).
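The "social DRM" proposal amounts to a visible watermark rather than a lock. A minimal sketch follows; the seller secret, field names and format are all invented for illustration, and (as the paragraph above notes) a stamp like this is trivially removed or forged by anyone who cares to:

```python
# Toy "social DRM": embed the buyer's name plus an integrity tag in an
# otherwise-unencrypted e-book, instead of locking the file. The secret
# key and the header format here are hypothetical.
import hashlib
import hmac

SELLER_SECRET = b"hypothetical-seller-key"  # invented; a real seller would guard this

def stamp(ebook_text: str, buyer: str) -> str:
    # The HMAC tag lets the seller later verify the name wasn't swapped
    # for someone else's (addressing the framing risk mentioned above).
    tag = hmac.new(SELLER_SECRET, buyer.encode(), hashlib.sha256).hexdigest()
    return f"Licensed to: {buyer} [{tag[:16]}]\n\n{ebook_text}"

def identify_buyer(stamped: str) -> str:
    # Read the name back out of the (plainly visible) header line.
    header = stamped.splitlines()[0]
    return header.removeprefix("Licensed to: ").split(" [")[0]

copy = stamp("Call me Ishmael...", buyer="Jane Reader")
print(identify_buyer(copy))   # -> Jane Reader
```

The design choice is the whole point: the book stays readable on any device forever, and the only deterrent is the buyer's reluctance to broadcast a file carrying their own name.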
The Anarchists stated flatly that DRM was pointless because it was impossible to implement with 100% success, and they loved to point out that all it took was for one person to bootleg an e-book and put it on the web… even a minority of one would release a book to the Free-For-All world. Therefore, they reasoned, authors and publishers should simply accept their fate as producers for the masses, and give the e-books away.
This callous attitude—often expressed, ironically, by people who actually had their own paying jobs, and who would never consider working for free, but just as often by those who had not yet entered the job market and did not have a grasp on the fact that the world, like it or not, ran on money—was nothing short of a base insult to authors, and if anything, only encouraged them to seek out more secure methods of selling their wares.
This incredible economic and philosophic disconnect between authors/publishers and consumers served to drive a larger and larger wedge between them. Authors and publishers began to lose all confidence that e-books would ever be anything but a money drain on them, and consumers seemingly refused to budge on DRM, or even the need to pay for e-books. DRM was the 800-pound gorilla in the room, not totally under anyone’s control, not totally benevolent even when treated kindly, and simply by virtue of its imposing presence, hindering any hope of cooperation between the two sides.
14: Copyright—From here to eternity
Another problem that was making things difficult for e-books was the reality of copyright law. A centuries-old concept that had been formalized over the years, it was nonetheless unprepared to deal with the modern issues concerning electronic files.
There has always been a general understanding among humans (morally, at least) that the creator of something should be the first one to benefit in some way from it. The basic concept goes all the way back to the successful hunter getting the first meat of a kill. As civilization became more agrarian, then more metropolitan, the concept was extended to other forms of invention or creation that improved the group’s lot. This included the earliest writings, which were jealously guarded by the owners and dispensed as they saw fit, usually making them popular or powerful in the process. In those days, the possibility of a party copying another party’s work was so rare, thanks to limited literacy and a generally busier population with no time for sitting about and manually reproducing writings, as to make document reproduction easy to control.
With the invention of the Gutenberg press, the ability to replicate written works in volume became cheaper and easier. The concept of copyright was born out of economic necessity, as leaders needed better control over a populace that could more easily reproduce the works of another and thereby gain what was considered another man’s profit by established (moral) right. (This was often the power and profit base of the leaders themselves, many of whom had specifically come into power through the possession of written documents and the information they dispensed from within.)
As set forth in most countries (there are still differences from sovereign state to sovereign state), copyright law designates a period of time during which the registered creator of a work is given the exclusive right to distribute and profit from that work. The intention was to encourage the creation of new works by guaranteeing that the first of any profits made would go to the creator, as a monetary incentive to make the effort in the first place (as opposed to not taking the time to create things one would not profit from). For a country like the United States of America, fresh from a revolutionary split from its mother country and aching for new ways to develop its new world, encouraging creation (and invention) was a no-brainer, and it helped to establish the U.S. as one of the most innovative countries of the era, partly thanks to the protections given to its innovators.
The earliest copyright laws were written loosely and concerned small territories and individual countries. But by 1886, enough countries had adopted the formal concept of copyright law to necessitate the Berne Convention, establishing a common set of copyright rules for all signatories to follow. It is still in effect to this day, and is endorsed by the overwhelming majority of countries in the world.
~
Although copyright laws are common fixtures in countries throughout the world, the social and political realities of those countries vary, and those realities shape their copyright laws accordingly. Not long ago this was an issue primarily for governments and a few major corporations, but the digital era has opened up a global market, accessible by individuals as easily as conglomerates.
As individuals took to the web and began seeking and sharing digital files, the copyright laws of one country were sometimes at odds with the wishes of a citizen of another country. To the individual, the only issue was whether or not another country’s laws applied to them personally… and in most cases, the conclusion reached, often based on a position of greed more than right, was a firm “No.” And with little likelihood of the authorities intervening within or across political boundaries, given the uncontrolled nature of the web, individuals exercised their ability to take what they wanted, and ignored copyright laws—even their own—in that pursuit.
The problem seemed relatively isolated at first, though an occasional publisher might get in touch with an individual via a “cease and desist” letter. But eventually, the copyright owners began speaking out in web forums, understandably questioning those who shared their works without due compensation to the artists. The result was a fracturing of opinion into three basic camps, and within those were nationalistic sub-camps, all of which had their own solutions.
One camp stated that the copyright that applied in an author’s home country should be enforced on his work, wherever it went in the world… in other words, an American receiving the work of a British author would be bound by British copyright law in his rights to use or redistribute the work. Needless to say, this was popular with creators, but unpopular with those who were already bound by their own country’s laws, and who saw no reason why they should adhere to another country’s laws for any reason. While some were willing to entertain copyright concepts that were essentially similar to those in their country, copyright concepts that were radically different from their own were summarily dismissed.
Another camp believed that the existing copyright laws needed an overriding international law that all other sovereign copyright laws had to adhere to as a base, sort of a more refined and all-inclusive version of the Berne Convention rules. Although this idea seemed logical, it ran into the same problem as the first camp’s: each country felt its laws should be the overriding laws that every other country should follow, and the very different laws of another country would be unacceptable to them.
And finally, the third camp represented the free-for-all Anarchists, who insisted that copyright was useless when it came to digital files, and so it might as well be abolished. This group simply ignored the creators’ insistence that they deserved some fair compensation, and the consumers who insisted that a copyright solution could be found with some honest effort, and continued to argue that their Utopian ideal should begin today. These three camps and their sub-camps made up the ongoing, vocal, at times virulent discussions about the future of copyright in a global digital economy.
And while these camps argued incessantly, there was another group standing outside of the camps: They were the lawyers representing the creators and publishers, and the law-makers, all of whom were patently ignoring the arguing camps while they did their usual business. The lawyers were glad to step up and do their jobs, when authors and publishers began adding DRM to their works, and pursuing individuals with the idea of suing them for damages. Beyond that, they were not interested in the details or morals of the situation. They were essentially spending their efforts on maintaining the status quo, and turning a blind eye to the real impact and ramifications of the digital era… easy to do, since it did not impact them overmuch, nor were their compatriots in other countries agitating for changes to copyright law.
~
As digital documents began to flourish and proliferate, those international copyright issues were thrown into increasingly sharp relief. Unfortunately, the relative ease of circumventing copyright laws, added to an ongoing debate as to the nature of digital files and their need for protection under copyright law, continued to throw a shadow over every discussion. The question underlying every overt discussion about digital copyright was, “Why should we care?”
Naturally, the significant majority of those who discussed the matter were in the “Who cares?” camp; they represented the consumers, of whom there are always significantly more than creators in an industrial economy. Although their arguments were rarely practical, or even logical, they managed to overwhelm the more realistic arguments of the creators simply by sheer force of numbers. Perfectly reasonable explanations of why a creator deserved to be compensated for their work, and why copyright was an effective tool to provide those protections, were constantly shouted down by the “Our needs are more important than yours” pundits. In fact, given the clear message being presented by the vast majority of consumers, it’s a wonder more creators didn’t get fed up and abandon writing for government jobs (where they could at least get a steady paycheck while feeling downtrodden and underappreciated). To listen to many of the older creators, it was clear that they hoped to reach comfortable retirement before the masses had their way.
Of those who managed to discuss copyright more reasonably, the actual terms of copyright were usually the center of discussion, specifically: How long did a creator have a right to profit off of their work before it was released to the public domain? Early copyright law often chose time periods that accounted for a significant part of the life of the author (a period that has been extended by a few decades since then). But some argued that an author did not need to control copyright for their entire lifetime, that they should not have an “unfair” advantage of being able to (potentially) cease working after creating one profitable work; and that a significant but limited number of years, often described as anywhere between ten and fifty, should be sufficient.
Others proposed a set number of years beyond the author’s death, to provide an income to the creator’s family or offspring. Although the question of how long an offspring deserved to profit off of their parent’s accumulated wealth was a subject of debate, the concept itself was considered by many to be reasonably sound. Most consumers seemed to agree, however, that once an offspring was an adult themselves, they no longer absolutely needed to profit from their parents’ works.
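The competing term models described above amount to simple arithmetic. Here is a hypothetical sketch, not a statement of any actual statute (though the life-plus-70 figure does mirror current U.S. law for works by individual authors; the fixed 50-year term is one of the ten-to-fifty-year proposals):

```python
# Hypothetical sketch of the two copyright-term models discussed above.
# In most jurisdictions, terms run to the end of the calendar year, so a
# work enters the public domain on January 1 following expiration.

def public_domain_year_life_based(author_death_year, years_after_death=70):
    """Term runs for the author's life plus a set number of years
    (the plus-70 default mirrors current U.S. law)."""
    return author_death_year + years_after_death + 1

def public_domain_year_fixed_term(publication_year, term_years=50):
    """Term runs a fixed number of years from publication (the 50-year
    default is one of the hypothetical proposals, not a real statute)."""
    return publication_year + term_years + 1
```

Under the life-based model, a work by an author who died in 1950 would enter the public domain in 2021; under a hypothetical fixed 50-year term, a work published in 1960 would have entered it in 2011.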
Again, the consumers and the creators argued these issues incessantly, while those who would actually make policy largely ignored their concerns and maintained the status quo. So a lot of words were spent that had almost no impact on the issue at hand.
~
Much of the copyright debate was considered controversial because it was in direct opposition to the laws governing most physical property, for instance, a home or valuables. In those cases, no one questioned whether or not an adult would come to own his ancestral home (assuming it was owned outright by his parents), or the property therein, upon his parents’ death, and could continue to pass said property down through generations. The civilized world had developed chiefly around physical property rights, and those rights were considered moral, sensible and practical.
However, written ideas, documents, etc., were not considered to be the same as physical property: They were, by definition, “intellectual property,” or IP. They were not directly quantifiable the way a single object of art, or a house, could be; a single idea could be spread to, and used by, millions. Although the ramifications of this were duly recognized by the minds of earlier ages, they already had a way to treat intellectual property as a form of physical property, since dissemination of that IP meant inscribing it to paper. Eventually the paper the idea was inscribed upon came to be considered a license to access that property, controlled by establishing who had the right to print and distribute more copies of the property. The copyright laws they wrote reflected that reality, and were used to provide a means of control. Today’s printed book distribution system is essentially based on this concept.
The music industry had depended on the same principle of tying an item of IP (a song) to a physical product (a music book, a record, a CD). Up until the end of the twentieth century, this strategy had been very effective in controlling music rights, as illicit reproduction of copyrighted music en masse was either difficult or not widespread enough to represent a significant financial loss. But around the turn of the twenty-first century, the development and proliferation of the MP3 file demonstrated that copyright law did not provide enough protection to prevent the replication and dissemination of copyrighted works by those who did not own those rights.
The e-book industry found itself facing the same problem, and looking to the music industry for solutions. Unfortunately, the music industry hadn’t managed to find a solution to the copyright issue by this time either, so they were no help to the e-book industry. In fact, the only thing the music industry had managed to demonstrate was how to spend a criminal amount of money on lobbying and creating organizations designed to isolate individuals who had disseminated digital music, and sue them for a fraction of a percent of what had been spent to catch them.
~
While all the debate carried on, consumers were already pursuing e-books, and many found themselves torn between the desire to get digital entertainment for free, and to pay the content creators for it. The problem was, most of them knew they could find, with a minimum of searching, copies of books on illicit sites that they could download for free, and with no adverse legal impact on them whatsoever. The Anarchists had been busy over the years, and had amassed an impressive quantity of illegally-posted material that was just waiting for download.
This reality made it hard for even the most moral of consumers to resist… and in discussions and forums, they often referenced this fact and used it to pseudo-rationalize taking content they wanted. The popular tactic was to blame the person who illicitly put the work online as being fully at fault, while they themselves could not be blamed for taking advantage of the situation… sort of like blaming the bull that broke the fence, or the farmer who didn’t come and fix it, for one’s being able to walk onto the farmer’s property and take an apple from his tree.
Most of those consumers also didn’t see the problem in making a copy of an e-book and sharing it with a friend, rationalizing that it was only one duplication (or a few, if it was to a few friends), and would do little real damage to a creator as far as loss of income was concerned. They pointedly ignored the possibility that their friends might further copy and redistribute the book, possibly to a website for open download, and thereby extend by orders of magnitude how far their “one duplication” could end up going. And as long as there was no way to reliably track illicit copies of an e-book, many of them took solace in the notion that a problem unquantified couldn’t be a serious problem at all…
~
It would be left up to only one faction to decide on copyright: The e-book sellers.
Faced with a public that seemed to have an unlimited capacity to rationalize not paying for books, and a government that seemed uninterested in hearing about it, many retailers resorted to implementing security measures to limit illicit copying and dissemination of works… DRM. Their goal, I’ll reiterate, was to limit… they knew that no DRM would be perfect. But they believed that the majority of the public would accept their DRM, as long as the overall process of buying and reading e-books was acceptable to them. And if the majority of the public bought their books and did not break existing copyright laws, the retailers could live with the loss caused by the rest of the consumers… the Anarchists and their uber-rationalizing brethren.
Amazon.com may be the best example of this strategy. Their Kindle store, a combination of DRM-secured content read on a company-controlled but easy-to-use reader, is specifically designed to limit the number of people that will break copyright, therefore mitigating loss to a reasonable level. In a way, Amazon is demonstrating the time-honored pre-21st century principle of using the Kindle as the “physical packaging” to control its intellectual property, thereby (and conveniently) not having to make any changes to existing copyright law. Other methods of DRM practiced by other sellers are designed with the same basic concept, tying the document to a physical container, in mind.
As a selling solution, it worked… but in terms of copyright law, it was a bandage on a broken bone. Its innate ineffectiveness was quickly brought to the fore when the subject of selling e-books internationally became a reality: Suddenly, companies like Amazon found that international copyright law did not allow them to sell overseas, even though they had sold physical books overseas before; and they found themselves cutting off previously-established markets from e-book selling, further angering foreign consumers who hoped, just like domestic consumers, to add e-books to their collections. And of course, the DRM could still be cracked or circumvented.
The opposite end of the spectrum was the “open license” copyright, pioneered by many individual e-book writers. The open license was essentially an “agreement” between the author and the consumer that they had the right to do anything they wanted with the e-book they’d bought, except to replicate it to other parties without express permission of the author. Unfortunately, the license had no teeth… it was a gentleman’s agreement, not much different from social DRM, and with no legal strength to back it up. Though consumers thought this was great, there was no way to tell how many such works were being replicated anyway, and no physical or legal way to stop it. It was pretend-copyright, a way of acting like the problem was not there, as many authors wished it wasn’t.
And still, the lawmakers ignored the need to revise the laws. Copyright law remains trapped by pre-21st century concepts, unable to be reconciled with today's web-based global market until the laws are rewritten to meet the present and future realities of digital documents.
15: Music—No, we are the future
Many of e-books’ developmental foibles were unique to the medium. However, another electronic media format had experienced many of the same problems as e-books, and in some ways served, for those who paid attention to such things, to illustrate the pitfalls to avoid when trying to sell electronic formats to the public. This prescient media was, of course, digital music. And it’s worthwhile to review the history of digital music, the better to understand many of the problems faced by digital books.
Recorded music had enjoyed a long period of mostly-compatible format-sameness, in the form of the wax and vinyl disc, or record. The first discs actually came in multiple sizes, forcing player manufacturers to eventually make and sell record players that played at least 3 different speeds to accommodate the majority of available records. The 3 common speeds were later reduced to 2 (33RPM and 45RPM), where it stayed for the duration of record history.
But other forms of music recording and reproduction existed, specifically, magnetic tape on reels. Again, there were differing sizes of tape, and some “reel-to-reel” tape players were designed to play the multiple tape sizes, and this, too, lasted for decades. In the early sixties, a new way to package tape loops into sealed cases came into use, known as “8-track” (for the four sets of two-track outputs on each tape loop, requiring the players to switch from set to set once the end was reached, and allowing them to play continuously). It was a popular format, but it required a specially-designed playback device that could not play records or reel-to-reel tapes. Only a few years later, a smaller “cassette” of two-track tape was developed, requiring yet another unique hardware device to play. For a time, there were three distinctly different ways to buy and listen to music, all available at the same time, none of which were compatible with each other hardware-wise. Stereo manufacturers provided the bridge to all of these devices, making sure any of these music devices could play through the central receiver and out the same speakers.
After years of domination by tape and vinyl, music manufacturers began playing with new ways of recording music digitally, instead of by analog signal. In most cases, these new digital signals could be played on existing equipment. But some formats required specialized equipment to play properly, or at all. And while many consumers were trying to figure out the digital tape formats, the industry had discovered a new way to encode music onto laser-read discs. Believing they had a better format than the old vinyl disc, because they could encode so much more music on the new “compact discs,” the industry started promoting CDs to the consumers.
The consumers were slow to adopt CDs, mainly because of a perception that their sound quality was inferior to vinyl (a difference that most people, in fact, were incapable of hearing), but also because of the requirement of yet another new device to play the CDs. The music industry dealt with that by ceasing production of vinyl, and releasing new music only on CDs. Eventually, even die-hard music fans were buying CDs and playing devices, simply because it was their only choice.
~
Enter, a few years later, the personal computer. As computers entered the home and people sought out more and more things to do with them, they realized they could use the computers to play, and later store, the music on CDs. However, CD music files were large, and took up a lot of storage space on the early computers. Music enthusiast programmers quickly took on the challenge to encapsulate music files into something more portable for computers.
A number of formats were developed, among them the format known as MP3. Though not necessarily the best of the formats, it was very portable, easy to work with, and easy to encode from existing music sources. Some enthusiasts latched onto the MP3 format and started using it regularly, then exclusively. They also started sharing music files with their friends, in much the same way as they had dubbed and shared cassette tapes with each other over the years, now through e-mail attachments and computer disk swapping.
When the first websites were created, many of them took advantage of the new web technology to share any type of files on the owners’ computers. Users quickly realized their music collections, some of which were becoming sizable as computer memory capacity grew, would be popular offerings on the websites, even garnering them a measure of fame and respect over their collections. Without a second thought, they added their music to their web groups, visitors found they could download and store them, and without fanfare, the sharing began.
Previously, music companies had turned a blind eye to most music sharing by tape and cassette: Most sharing was time-intensive, so few people bothered to record more than one or two copies of music; the companies had decided that trying to police all of those technically illegal tapes wasn’t worth the trouble or the cost, except in cases where more enterprising individuals tried to reproduce sizable volumes of music. Besides, they reasoned, the copies weren’t nearly as good as their professional recordings, and most enthusiasts would sooner or later buy the better-sounding copies. But digital music was changing the game: Not only was sound quality much better, reducing the desire to purchase professional recordings; but thanks to the websites, music files could be replicated infinitely, and distributed worldwide instantly. Suddenly, the companies couldn’t afford to turn a blind eye to technically illegal music; their bottom lines were already suffering because of it.
So the music industries went on a multi-pronged attack. First, they tried to convince computer manufacturers to build in copy-protection tools (music’s form of DRM), to prevent music files from being burned to new formats. But third-party tools were quickly hacked together for computers, and broke through this tactic. Next came manufacturing the CDs themselves to prevent copying… which was equally ineffective against third-party burning software. So, they went to the government regulators, and began an intensive lobbying campaign designed to give them broader authority to pursue and prosecute individuals who were caught replicating (or “burning”) and distributing music files. They based their arguments on existing copyright laws, taking advantage of the fact that they had not been updated sufficiently to deal with the new digital files.
The Digital Millennium Copyright Act, passed in 1998, was a result of the record companies’ efforts, backed by the motion picture industry, which cleverly foresaw the same problems happening to them as soon as electronic storage capacities were sufficient to allow users to store whole movies on their computers.
Computer users, for the most part, were angry. For one thing, the music industry had previously given them carte blanche to copy records and tapes… why, all of a sudden, had it turned on consumers with the introduction of new formats? Consumers also felt that the music industry hadn’t been serving them that well of late, charging ever-increasing amounts for less and lower-quality music, and still somehow leaving their favorite musicians feeling underpaid and underappreciated. So consumers decided they owed nothing to the music industry, least of all customer loyalty. The music industry did little to nothing to change their attitude, being comfortable selling music on its terms, the same way it had for decades.
So some customers decided to take matters into their own hands. When music-dedicated websites were shut down over copyright violation notices, peer-to-peer (or “P2P”) software that connected computers over the web was applied, creating a new form of website: an aggregate link site that held no actual content of its own, but that provided connections to content on other people’s computers. This distributed the music sources—and the blame for sharing copyrighted music—across hordes of computers instead of a single server, a worldwide network that would be impossible to completely crush.
~
Despite the aggressive attempts by Sony to market a new digital recording standard, consumers had already settled on the MP3 format, and were rapidly developing it into a de-facto digital standard. Though the music industry resisted it, third parties and independent programmers embraced the format and began writing MP3-playing apps and burning software to assist in the creation of new MP3 files. Eventually Sony gave up on the idea of turning its digital standard into the mainstream choice, and began incorporating the publicly-adopted MP3 format into its own devices. Other companies, eager to provide playing hardware to a growing market of MP3 listeners, quickly added MP3 to their short lists of supported music formats. MP3 players proliferated quickly once a standard format was established, and they helped to push the remaining stragglers into line behind the MP3 crowd.
But although companies were now making money off of hardware, the majority of the actual music was being burned from personal collections, or shared and collected from P2P networks… in short, the music industry itself was not profiting from the MP3 boom. Efforts to scare consumers into refraining from copying and sharing music, via the first highly-publicized trials against file-sharers, only exacerbated the “us against them” mentality that had been fomenting between the music industry and consumers. For the first time, content “piracy” was becoming a mainstream issue, eclipsing the same concerns over software piracy that had filled the previous decade.
Further, independent “garage bands” and trend-setting professionals were finding ways to get their music out independently of the music industries, using MP3 files offered on private websites, and ably demonstrating that they offered viable alternatives to the music industries.
A rift was beginning to develop between the music corporations and the music enthusiasts, both sides feeling like the other side was not providing for their needs, and in fact, was acting directly against them. Artists were largely caught in the middle and feeling squeezed by both sides, and in the meantime, the peripheral industries that supported music (other than the MP3 hardware industry) were sinking into obsolescence and obscurity.
It took an outside player, Apple, to bring the factions together through its well-integrated combination of hardware, music store and acceptable pricing and security models. Though many disbelieved it would happen, iTunes became accepted on all sides of the issues (with the exception of the physical production industry). Thanks to Apple’s lead, the once-contentious digital music industry was moving ahead to a brighter future.
~
Of the major differences between the development of digital books and of digital music, probably the single largest is music’s relatively quick adoption of one overriding digital format standard. Standardizing on MP3 allowed industry and consumer alike to concentrate on the tools and delivery mechanisms for digital music, as opposed to fighting over competing formats and their multiple delivery systems (as e-books have done since their inception). Even when industry and consumer disagreed on issues, they were all able to progress much more quickly under a unifying format.
Both industries concentrated on devising special hardware for the playing of their media. The difference was, as e-books did not have a standardized format, hardware and software makers could not keep up with the multiple changing formats, nor provide the ideal customer experience or ergonomics, due to the extra effort involved in satisfying the multiple format issues. Digital book delivery systems were not well-integrated into the various dissimilar reading hardware, and hardware pricing was all over the map. Individual companies unwittingly wasted time supporting formats that would eventually disappear, leaving them with orphan products, and leaving consumers with orphan readers and e-books. A lot of ultimately needless frustration resulted from that lack of standardization early on in the e-book development process.
And even if the publishers decided on a unifying format today (as many are certain the OEB, or ePub format, will eventually be), there still remains a sizable hardware and software infrastructure dedicated to other formats, and a huge potential loss of income if those other formats are abandoned now. The architects of the Tower of eBabel recognize the areas where they went wrong, but are unwilling to tear their work down and start over; rather, they hope it will somehow fix itself to their present design’s satisfaction, and not cost them as much grief as they clearly expect it will.
Both industries are similar in the way they applied DRM to their products. Neither industry found a way to justify their DRM decisions satisfactorily to the public, or even made much of an attempt other than claiming that an unknown and unquantifiable segment of their consumers were thieves, and that they had no choice in the matter. Moreover, their security methods were completely ineffectual, taking the teeth out of their bite and further holding themselves up for ridicule by their customers.
Both industries initially struggled with pricing of their digital products. But the music industry settled on a price range fairly quickly, one that was satisfactory to music customers and brought in a healthy profit for the industry. In contrast, e-book sellers price e-books everywhere from a single dollar to the equivalent of hardback prices, and customers cannot even agree on whether a single dollar is too much for an e-book.
The e-book industry looks to the music industry in the hope that they will avoid its pitfalls, and somehow be even more successful. But so far, the music industry has managed to do more right things than the e-book industry in the same situations, and there seems to be no indication that the lessons learned by e-music are actually being applied to e-books.
16: Apple—iTunes to the rescue
When Apple released the iPod, it was very late to the digital music party: A horde of MP3 players had been available for years by then, from literally dozens of worldwide companies, major and minor. In terms of playability or quality, there was little to distinguish the iPod from any save the most pathetically-built players out there, including some by companies that had been major names in audio components for decades.
But Apple was already known for its innovative hardware and sophisticated, consumer-friendly software. Consumers believed that when Apple released a new product, it might be late to the market, but that was because more work had been put into making it better than everything else, and therefore it was worth the wait. In the case of the iPod, Apple did not disappoint.
The iPod featured a new interface unlike any other, a well-designed “scroll-wheel” that made content navigation quick and easy. Apple also developed a unique, Apple-like design, complemented by matching white earbuds, giving the iPod a designer look in a field of mostly utilitarian-looking MP3 players (and all-black earbuds). The music-playing software was comparable to the higher-end players; and when combined with the innovative and attractive design, and a frenetic and stylish marketing campaign, the iPods were runaway successes.
I remember the impact the iPod had on society: Formerly, MP3 players were secreted away in pockets, and black earbuds and cords were barely noticeable on passersby, except up-close; suddenly, people carried iPods in their hands (so as to use that cool scroll-wheel at any moment), and the white earbuds and cords could be seen half a block away, making them uber-visible in public. From a device that was at best unobtrusive, we jumped into a world of conspicuous music listening… from a largely unknown quantity to seemingly everywhere, almost overnight. I suspect it was akin to the impact the first brightly-painted production cars made on a largely black-clad early automobile industry.
But Apple didn’t stop there. Though the iPod could play MP3 files, it was also designed to play Apple’s proprietary music file format. This feature wasn’t too evident to the first iPod users, because it was not designed to be part of the burn-and-play activities that the early users took to. But it would dovetail with a new service, one that Apple had intended from the start to be part of the iPod infrastructure.
When the iTunes digital music store was opened and promoted as a quick and easy way to get professionally-recorded music into your iPod, the music industry knew it finally had a way to get back into the developing digital music business. iTunes established a well-run digital music sales model, including an acceptable pricing structure for individual songs and albums. That was in turn tied into an attractive and easily-operated software interface that automatically connected with the iPod hardware and allowed smooth interaction, a rare feat with computer software and peripheral hardware in those days. iPod users bought into the iTunes concept eagerly, were sold on its ease of use, and in no time, sales of digital music took off.
The iPod/iTunes combination became an unbeatable force in the digital music arena. Apple found that the popularity of the iPod rose even faster than it had prior to iTunes, and as the technology improved and memory capacity grew in smaller and smaller forms, Apple was able to develop new versions of the iPod for more and more customers. Each iPod played the same files as previous models, and accessed them the same way, so the only real difference from one model to the next was the container itself (and its storage capacity). Apple was also able to authorize a series of accessories for the iPod lines, further enhancing their utility to consumers… and the device’s standardization made the manufacturing of those accessories exceedingly easy. In no time, non-iPod users were becoming jealous of the many gadgets that could plug into an iPod, and some were moving to iPods when it came time to replace their old devices.
~
At first, Apple struggled to win the cooperation of music publishers to include their music in the iTunes store. There was serious concern in the industry, inspired by the problems they’d had with peer-to-peer sites swapping published music, that their music would be bought by only a few and shared with many. This led the publishers to demand Digital Rights Management (DRM) be applied to their products. Apple agreed, though officially with protest: Their expectation was that the online music was so cheap and easy to get through the store that sharing would not be that serious an issue. In practice, the DRM was easily ignored by iPod users who did not share music, and as easily circumvented by those who did, leading most consumers to declare it a waste of time. Still, music sold better through the iTunes store than the music industry had expected, and Apple gained a modicum of respect and bargaining power thanks to that success.
Once Apple’s success had put it in the driver’s seat, it began pressing for a removal of DRM from the store. The music publishers continued to resist, but Apple had its own sales successes to point to as an indicator that they knew what they were talking about, what would make the customers happy, and how much more they would buy as a result. Eventually a few publishers agreed to try non-DRM’d music, with a slightly higher price (to offset what they expected would be more piracy compared to DRM’d sales).
Instead of the same or lesser sales that the music industry expected, music sales actually increased, not just with the non-DRM’d material but throughout iTunes. Apple has touted this as proof that a lack of DRM does not negatively impact music sales, and has since pressured more music labels to lift the DRM restrictions on their products. So far, sales figures have indicated that lifting DRM, and thereby pleasing customers, has been a runaway success.
~
Many e-book enthusiasts have pointed to the success of the iPod/iTunes model and asserted that e-books should be able to achieve the same successes under the same model. The most significant part of the Apple model, the lack of DRM that has demonstrated increased popularity and sales, is arguably the part of the model that most e-book fans would like to see emulated.
There are, however, enough significant differences between the e-book arena and the iPod/iTunes model to suggest that adoption of the Apple model may not be all that straightforward. Surely the most significant is the fact that there are many e-book formats in common use, and many different kinds of hardware for e-book reading, most of which do not use directly compatible software… nothing like the digital music industry, which uses essentially the same formats and access methods on all hardware. (Though iTunes does not interface with any music players besides iPods, the only thing specifically stopping it from doing so is the DRM software.) A great deal of the e-book replicating and sharing activity is due to users’ need to convert those e-books to other formats, since many e-books do not come in every format. To replicate the iPod/iTunes model, a unified format and universally-compatible access method would have to be devised for all e-book readers; and to date, the hardware makers have not shown an interest in implementing such a universal setup.
The second major difference is in the way e-books and music are enjoyed: While listening to a specific piece of music is generally a brief experience, demanding a quick and easy interface to grab that music and go, literature is experienced over much longer periods of time. Moreover, an e-book is a product that lends itself to more extensive and elaborate pre-purchase examination (browsing through comments, reviews and excerpts, etc), and can be experienced in many different ways (reading straight through, checking references, making notations). This makes e-books less of an impulse purchase than a piece of music, and more of a deliberative process, thereby requiring a different level of marketing and references to be offered to the consumer before purchase. The iTunes store does provide a few marketing tools, but nothing as extensive as the tools e-books can use.
Third is the average cost of e-books, which is also significantly different from that of music. Most individual pieces of music are sold for a price roughly equal to a US dollar, considered by most to be an insignificant, nigh-disposable amount suitable for an “impulse” purchase… they are like digital versions of the candy bars for sale at the market checkout. Most e-books sell for $5.00 to $10.00, and more, amounts that are considered more substantial and less “impulse”… a sit-down meal by comparison. This also tends to force consumers to put more time into reviewing the product before making a purchase.
A few e-book companies duly attempted to tie their format, or the dedicated reading devices they sold, to desktop software designed to tap into online sources of books, make purchases easy, and facilitate downloading of the purchased material to the device. Their models were similar to the iTunes model, at least in intent. However, none of the companies involved had significant access to a catalog of e-books that equaled the range of material that iTunes held over the music industry. And most sites’ content was not readable on all available devices, limiting the potential clientele to those who happened to have readers that could handle that site’s content. The industry was still too fractured to make a service like that workable for the majority of e-book users.
The iTunes model was designed to make impulse purchases of musical “snacks” quick and easy. E-books require more thought, deliberation, and more time to properly enjoy, and there is no unifying format and hardware design, making the actual iTunes model look less ideal for e-book purchases. But businesspeople and enthusiasts persisted: Was there any way to create an iPod/iTunes-like model that would successfully bring e-books, publishers and customers together? And if it could be done, would it be what the e-book industry needed to really get it started on the road to commercial success?
It would take one of the dot-com boom’s greatest success stories to finally address those questions, and to provide some answers.
17: Amazon.com—The game-changer
The ascendance of Amazon from just another dot-com to global powerhouse was a surprise to some, because initially it seemed to defy the established rules of dot-com growth and development. Whereas most dot-coms started out with a popular idea and an infusion of someone else’s capital, and either shot upward like a rocket or exploded violently upon launch, Amazon started with a simple, mundane idea—selling books—and plodded along with modest success, and a large red area on its balance sheets, for years.
Amazon was not playing the dot-com game. It was playing an older game, that of the traditional capitalist business: It worked steadily to build its market share, improve its infrastructure, learn its business, and grow little by little; until one day, the world woke up to discover that Amazon was suddenly in the black. It had become successful the old-fashioned way, by good old innovation, hard work and perseverance.
Amazon was a reseller, a middleman; it was not a publisher, and did not make its own products. Yet it provided an incredibly rich set of web-based tools, designed to make it easier for people to find the books they wanted. Optimizing those tools made the site the best place to go to find books of any kind, and it continued to grow and prosper. Soon it began expanding to other products besides books, and opening up its services to an international audience.
At some point, the likelihood that e-books would be a major part of book sales was impressed upon Amazon. It was already selling various models of e-book reading hardware through its store, and it was certainly aware of the growing volume of e-book content developing worldwide. It was also aware of the cry for an “iTunes for e-books” by businesses and consumers. However, it was also aware of the many formats, incompatible reading hardware, complaints about reader screen quality and security concerns that dominated the market at the time. Amazon decided that, as it stood, that was not a market it wanted to be a part of.
In order for the e-book marketplace to make sense to Amazon, there had to be more order to it. If there was more order, it would be possible to optimize and profit from the market. And Amazon realized it was one of a very few organizations in a position to establish any kind of order to e-books. So, like the Martians in H. G. Wells’ novel, its executives watched the industry, and began quietly developing their plans to take over the e-book world.
~
One of the most important decisions the company had to make was regarding the final form of the e-books delivered to the customer. It had three likely choices: Offering multiple formats created by other parties, as some e-book sellers did at that time; choosing and offering a single existing format from another party; or creating its own, proprietary format. Creating a new format was out: Enough damage had already been done to the industry by new formats, and there would be a significant development curve involved. The economics of offering a single format were much better than supplying multiple formats, but there was the concern that a format Amazon did not own might change, and force expensive and extensive work to re-accommodate the changes. Amazon decided the best way to combat that was to identify an existing format, one that was technically “stable” and well-suited for literature delivery for some time to come… and buy it.
So Amazon bought the MobiPocket format. MobiPocket was one of the most stable of e-book formats, and because it was already capable of being read on almost any electronic device imaginable, it was already one of the most popular single e-book formats. This news led to some trepidation in the e-book world, as consumers hoped Amazon would not somehow prevent MobiPocket from continuing to offer the same quality e-books on almost any reading device. Perhaps Amazon hoped that MobiPocket users would swarm to them and become instant customers… but considering the track record of companies that had bought their way into the e-book industry previously, that was not likely to happen.
Although Amazon had a lot going for it—an extensive reach into the publishing world, impressive corporate power, and now a stable e-book format—it did not have the overwhelming support of publishers. In fact, publishers would have been just as happy to opt out of the whole e-book project right off the bat. However, Amazon had a lot of clout, mainly because it was one of the largest international booksellers in the world. Amazon pressed the publishers to come along as e-book suppliers, and the publishers, concerned over the possibility of losing major markets by defying Amazon, tentatively agreed.
They did have a stipulation, though: They wanted their content protected from piracy. And they left it up to Amazon to work that little problem out.
Amazon’s solution was very similar to Apple’s iPod/iTunes model: An integrated store and hardware device combination that would allow Amazon some control over the delivery of content. For the hardware component, Amazon decided to build its own: The Kindle e-book reader, one of the first of the eInk-screened devices on the market, and with the instant-gratification ability to download new books wirelessly, provided a buying and reading experience that was very different from that of other reading devices. A major advantage of the device was its eInk screen, a new display technology that more closely simulated the experience of reading on paper than LCD displays, something that many consumers demanded in order to make e-book reading work for them. Consumer reaction to eInk was phenomenal, and suddenly, people who previously hadn’t considered the possibility of reading e-books were recommending to their friends that they check out the Kindle.
Though the Kindle device wasn’t perfect in its functionality, nor considered attractive in design, it had another thing going for it: The promotional strength of Amazon, powerful and influential enough to sell devices that couldn’t even be seen in a store. Between major press events, purchasing of newspaper and magazine space, arranging for articles and hands-on reviews by prominent people, and of course, getting the endorsement of popular television personality Oprah Winfrey in the U.S., Amazon jet-propelled the Kindle and Kindle store into the American consciousness. Within the first year, Amazon’s Kindle was selling incredibly well, and suddenly the country—not just the relatively few e-book enthusiasts, but the entire country—was beginning to seriously talk about e-books.
~
Amazon had another ace up its sleeve: Winning additional public support by making it possible to self-publish works into the Kindle store. The Digital Text Platform (DTP) self-service system was set up so that independent and aspiring authors could upload a manuscript with a few keystrokes, and find themselves (theoretically) competing against the latest bestseller in a day. The PR value of this move cannot be overestimated: It put a smiling face on the corporate giant, and drew a lot of attention from hopeful authors.
The big publishers were not thrilled about the prospect of amateur e-books selling next to their own. On the other hand, they had the money to pay for superior advertising campaigns, and were sure that amateur e-books couldn’t hold a candle to their own, popularity- and quality-wise. If anything, the prospect encouraged them to step up the usual disinformation campaign against indie writers and publishers. Amazingly, even some indie authors adding their works to the Kindle system acted as if the rest of the indie authors were indeed crap, though they themselves were still just waiting to be discovered… so powerful was the venerable Big Pub campaign that even those authors on the outside, the “unwashed masses,” were sure that getting on the inside was the only way to legitimacy.
For most authors, it did not work. In fact, most publishers still took the attitude that those who had published independently were somehow “damaged goods,” and therefore would never publish them. A few publishers were willing to take on an indie author they believed in, if they thought it would make them enough money to be worth their while. But those publishers were few and far between, and the chances of an author being “discovered” that way were akin to the chances of winning the lottery.
~
Still, the Kindle, the Kindle store and the DTP system made waves that reached clean across the country, and even overseas to countries that did not have access to the Kindle, but wanted it. Amazon had done what no other hardware or software company to date had managed to do in the New World: In the space of a year, it had singlehandedly brought presence and legitimacy to e-books.
Other companies immediately took notice. Suddenly, the companies that manufactured the eInk screens for the Kindle and other dedicated reading devices were besieged with manufacturing orders, and other companies tried to quickly retool for manufacturing of e-book reading hardware. Pundits were predicting a wave of eInk readers, finally bringing e-books to the masses.
Publishers who had previously shunned e-books also took notice, mainly because Amazon was doing all of the work that they hadn’t figured out how to do for themselves. Because Amazon’s formula seemed to be successful, the publishers allowed Amazon to dictate terms to them, even when those terms were rather harsh. But even if they weren’t thrilled with their profit margins, they could hardly argue with Amazon’s results. E-books suddenly seemed to work, and Amazon’s success inspired the inevitable attempts by other e-booksellers to copy its business model.
But, as pointed out earlier, Amazon’s system depended on artificially tying their e-books to a reading device, to mimic the “physical product” model that the industry was familiar with. And that model was a throwback to pre-21st century business models. Only Amazon sold the Kindle, and it was too early in the history of the Kindle store to find out what would happen if the public decided they didn’t like the Kindle, the store, or Amazon; how would people feel about getting rid of their Kindle, and losing out on all the books they’d purchased? Would Amazon continue to improve on the Kindle, or would they one day stop supporting it? What about reading books from other stores? Why wasn’t one format the same as another?
Amazon’s bandwagon was big, pretty, and fast. But in the end, it was still being driven just like the other bandwagons, and it stood a good chance of being driven over a cliff. The future—even of Amazon’s e-book venture—was uncertain.
18: The amateur authors—Gonna fly now
With the creation of the Kindle Store and DTP system, many professional and amateur authors found a new outlet for their works. The Kindle DTP was quickly inundated with self-published and non-published manuscripts from authors who hoped the new Kindle would introduce a realm of new consumers to their works.
Many brand new authors also flooded into the Kindle DTP, hoping that the new system would not only make it exceedingly easy to get their works published, but would succeed in creating their “big break” into writing success.
The phenomenon is familiar to anyone who has seen movies like “Rocky,” the story of an underdog fighter who serendipitously gets the chance to fight the champion, and proves himself as good as the pros through sheer effort and raw dedication. This is one of the greatest elements (some might say “myths”) of the American Dream, the idea that anyone can make it with enough honest, hard work—and the idea that position and advantage don’t necessarily guarantee success.
~
Many of these budding authors were ready, with finished and proofed manuscripts, and some of them had already gone through the self-publishing process. That usually meant they had paid a “vanity publisher” and had printed copies of their books previously. The success of those books was all over the map, but to the authors, it mattered little how successful the printed books were: If they had been successful, it was a clear indication that there was a greater market out there, waiting for them; if they had not sold well, it was an indication of the regional limitations of self-published printing distribution, or of an audience not yet found. Either way, the Kindle store was seen as a way to beat the physical limitations that vanity publishing suffered.
Many of these accomplished authors had also gone through the e-book process independently—which, up until the Kindle store, generally meant using multiple software applications, cutting and pasting, manually adjusting layouts and manipulating covers, and outputting e-books in one or more formats; and then setting up websites, creating money-handling accounts, and creating their own online storefronts. Although some authors were willing to do all that, many of them felt that they should not have to go through the trouble of all that conversion… after all, to paraphrase a certain sci-fi TV doctor: “I’m a writer, not a programmer!” They were also frustrated with the many different e-book formats to choose from, the varying opinions about which were the most popular, the most flexible and the most attractive, and especially the question of how many formats an author needed to offer in order to ensure the most sales for the least amount of trouble.
To be sure, there were some writers who didn’t mind all of this non-writing work to sell their e-books. Those who felt comfortable going through the entire process likened themselves to well-rounded Jacks-of-all-Trades, and considered themselves that much more accomplished thanks to their wider skillsets. There was considerable doubt in the community, however, whether or not that wider skillset made them better authors.
Nonetheless, all of these authors agreed that the Kindle store would extend their market, hopefully to e-book enthusiasts who were eager to explore new e-book formats, and to print readers who were eager to try this new reading experience.
~
Many of the unpublished authors who quickly latched onto the Kindle store had originally sent their manuscripts to publishers and agents, only to see them lost in the slush piles; or perhaps they had received form letters registering no interest in their work. Though some authors take this as a sign that they are, in fact, unpublishable, and eventually stop writing, most authors continue to work away at their craft, sure that some more polishing will do the job, or that they simply haven’t been discovered yet.
There were also many prospective authors who had never submitted their works to a publisher before. Some of those, before the Kindle store, had never seriously tried to get published—or even to write. Maybe they felt they had no real chance to break into writing, or maybe they simply expected the process to be more work than they could handle, the same reason why more established writers had not yet gone the e-book route.
The Kindle store was seen by many as solving both problems: It would get their work out there for people to finally see; and, once seen, they would surely become famous like Rocky Balboa and get snagged by some warm-hearted publisher interested in furthering their career.
So, many unpublished works came out, and many new works were created in record time, to be uploaded into the Kindle store. The Kindle tools made it easy enough to do: Anyone with a modern browser and a Word file could upload a document in minutes. These unpublished works joined the many previously-published works in the rapidly-growing Kindle store, amateurs and professional authors side-by-side selling their works; a chance to prove themselves against the bestselling authors of the world.
~
The influx of authors swelled the offerings of the Kindle store, something that Amazon was more than happy to see. However, Big Pub was less than enthused, because of the sudden “democratization” of books and authors in the Kindle Store: Suddenly, their best-selling and Nobel-winning authors were positioned next to housewives from Scranton and blue-collar autobiographers, in some cases with little distinguishing them from each other.
This prompted Big Pub to re-emphasize their subtle PR campaigns against non-published authors, promoting the advantages of being vetted by professionals. The publishers also took advantage of their advertising positions with Amazon, and made sure their products were being featured prominently in the Kindle store.
Beyond standard advertising techniques, Amazon had other tools that Big Pub also knew how to take advantage of. Amazon’s data-tracking systems were capable of analyzing sales and providing information to the visitor such as purchases made by others who’d already bought the item being considered, similar products, complementary products, and comments about the products. Many of the amateur authors sought to take advantage of this by directly asking friends or readers for comments, or encouraging sales through the Kindle store, in order to get more “buzz” attached to their products. In many cases, this active “seeding” of comments and positive reviews was seen as a blatant attempt to manipulate Amazon’s system to their own ends.
However, the publishers had been taking advantage of these methods for years, through cross-promotion of their books by contracted authors, and by using their extensive networks to submit materials for comments and reviews. Though they used many of the same techniques that the amateurs were now using, somehow the publishers were not accused of “gaming the system” or taking advantage of their status. And as long as some amateur works still surfaced among their more heavily-popularized works, the publishers could avoid any semblance of dominating or controlling the market, and amateurs could still believe they had a shot at the American Dream.
~
It did not take long before a few amateur and unpublished authors surfaced within the Kindle store whose works were every bit as good as those available through the publishers—and, many argued, even better.
Some of these authors had never published before, and so had never had the benefit of the services a publisher has to offer, such as editing, proofing and layout professionals, helping to polish a work to perfection. Of these authors, many had proven themselves equal to the task of proofing and editing their own work… many had, in fact, worked as editors or proofers in the past, and simply applied their already-established skills to the job. Others had enlisted professionals or friends to edit or proof their work, demonstrating an ability to follow the same steps as a professional publishing firm, but on an informal or contracting basis.
Others of these authors had published before, but were not bound by contract to submit a new work to a publisher. Those authors often took the same steps as unpublished authors, either applying their own skillsets to proofing and editing a manuscript, or hiring/drafting pros or friends to help them with that task.
Many of the stories by these amateurs and unpublished authors were notable, either by the way they so effectively matched the established storylines of Big Pub novels, or by the ways they proved more inventive and unique, showing a writing style and substance that most Big Pub content seemed to be sorely lacking. Many of them were simply tapping into markets that Big Pub had ignored, sure that those markets held no demand or profit potential.
But the quality of these writers presented a disturbing reality: They proved that the services offered by Big Pub were not necessarily required to be a Good Author. Though such authors were admittedly rare, they demonstrated that the major publishers were not indispensable.
The publishers had a way around this problem, of course: Signing those authors. By offering independent authors publishing contracts, the publishers could accomplish two things: One, they could profit from those authors’ popularity and skill; and two, they could divorce the authors from their indie status and sweep their once-independent past under the rug. Very few authors would turn down such an offer, of course, as it was not only a publicly-accepted de-facto statement that they had “made it,” but it all but guaranteed the author would make more money through the Big Pub machine than they could make on their own.
In this way, Big Pub could keep the perception of amateur writing quality low by removing any authors that threatened to upset that perception… and authors were more than happy to go along. Clearly there were a few Rocky Balboas in the crowd, but isolating them from their peers kept their status as indie successes quiet. Most of the rest of the amateur authors continued to struggle along, but received no recognition from Big Pub, and remained fully in its shadow.
19: The pro authors—Fight or flight
While the amateur authors fought for their share of the American Dream, another set of authors fought to retain their dream. These were the professional authors, those already published, who were watching over their shoulders as the e-book era was swiftly overtaking them.
I use the word overtaking because most authors were not prepared for the changes that the e-book would demand of them. The e-book threatened to irrevocably alter the pricing structure of books and publishing, and therefore the paychecks they expected to receive for their books. There was the threat to intellectual property that e-books represented, the possibility of a work being easily copied and redistributed without permission or compensation. There were the publishers’ clumsy efforts to secure those rights, and the incredibly virulent backlash from consumers against DRM. And there was the evidence of the music industry, which seemed to indicate that Big Pub and its partners—the authors—would be under direct siege by the hordes of “talentless amateurs” that would overwhelm them and ruin the industry.
It’s no wonder, then, that most established authors went along with the opinions of Big Pub, and sided with them in the effort to hold back the e-book industry as long as possible. Big Pub was sure that e-books would ruin them, and authors saw no evidence that the pubs knew what to do about it; so they shared in the publishers’ panic, and ran with them like baby deer following their mothers through the forests.
A few authors have managed to distinguish themselves in the e-book battle, on both sides. J.K. Rowling has pointedly refused to allow her popular book series to be sold as e-books, in a bid to avoid the possibility of her works being reproduced and disseminated without her permission (and without her being paid). She is certainly not the only author to take this position. However, as her books are oriented towards young adults and children—a group that is more aware of and interested in e-books than most older adults—the decision is considered a particular slight to them. Further, many of those younger consumers have reacted to her decision by manually transcribing her books to electronic format, in a blatant and pointed disregard for the author’s wishes, and distributing them online… exactly the outcome she hoped to avoid by shunning e-books in the first place. This ironic turn of events, a segment of a popular author’s fans turning on her and intentionally bootlegging her work, has already become e-book legend.
Harlan Ellison has also become famous for his negative reaction to e-books—ironic in itself, considering his reputation as a sci-fi author with a better understanding of the impact of technological development than most. Ellison has gone out of his way to prosecute any individual caught disseminating digital versions of his works, with a fervor generally exhibited by NRA members and anti-abortionists. Many other authors view the direct threat of litigation over a copyright-protected work as the best weapon against illegal copying and dissemination, despite the demonstrated ineffectiveness, ridicule and damage to reputation that various bands suffered through during the days of music-sharing through Napster… possibly these authors hope the other aspect of the Napster era, the raised public profile of bands like Metallica during their Napster crusade, will similarly improve their own exposure and garner more sales. At the very least, authors like Ellison surely hope they will not be dragged out of retirement by e-book-ravaged royalty checks and forced to make a new living.
~
Other authors have taken the attitude: “If you can’t beat ‘em, join ‘em.” These authors, having seen the writing on the wall, decided to find a way to turn the e-book phenomenon to their advantage. Authors like Cory Doctorow have taken the tack that e-books are an effective way to advertise their profitable printed books: They literally give their e-books away for free, while urging those who read them to go out and buy the printed book if they liked it. They assume that some people will not buy a book they’ve already read, but that some will, and that readers are more likely to buy an author’s other books once that author is familiar to them. Doctorow has said, “The greatest threat to an author is obscurity,” so he uses e-books as a “loss-leader” to expose his work to more people and build sales of other books. Other authors using the loss-leader approach release some of their secondary material as free e-books and leave their primary books in print-only formats, letting the free e-books bring in customers and drive sales of the printed books.
A few authors experimented with e-books, but with limited success. Stephen King attempted to release a novel a chapter at a time as an e-book, asking only that his fans “donate” to his website for each chapter; the intention was that if he received a set amount of money, he would write the next chapter. But when chapter donations dropped below the level King had set, he stopped writing, infuriating those customers who had duly paid for the earlier chapters and would now receive no complete book at all. The bad taste left in the mouths of King and his customers has since prompted King to avoid e-books, or to position them as undesirable products (usually through higher-than-standard prices), in order to discourage e-book sales and encourage continued print sales. A few other authors have been singularly unimpressed with their first attempts at e-book sales, and have since pulled out of e-book distribution either significantly or completely.
There are established writers who have authorized the release of their primary works as e-books, through some or many of the e-book distributors available online. These releases come in two flavors: One is offered for a price, free of any type of security, in the hope that this positive presentation will be enough to mitigate a significant amount of loss through copying and bootlegging; the other locks down the e-book with DRM, in order to prevent its copying and bootlegging after the purchase. The jury is still out on the effectiveness of DRM used in this way: to date no DRM method has proven unbreakable, leaving many of its opponents to declare it worthless. Its proponents, however, argue that DRM only has to deter enough potential scofflaws to bring losses through bootlegging down to an acceptable level, and they are satisfied that it is doing that job.
And then there are authors who have chosen to offer e-books in a way calculated to garner the greatest appreciation from customers, while still providing some income: E-books sold at prices the consumers consider very fair (generally below $5), in multiple formats, and without DRM. Most of these authors hope to create a following for their work that could conceivably match, or even transcend, the market possibilities of their printed counterparts; or to breathe new life into their older works, which have gone out of print circulation and therefore represent potentially recoverable income.
~
While these authors took different approaches to e-books, all of them were aware that they were now competing directly with amateur authors, the kind of people they had previously stood well above, looking down on them from the parapets of the publishing “Castle.” The development of e-books, unfortunately for them, had provided a way for the amateurs to scale the castle walls and stand eye-to-eye with those inside.
Some of them were magnanimous enough to share the spotlight with the amateurs, certain that their work would outshine the amateurs’ and duly send them home in ignominy. Many of those authors were surprised to discover that there was often very little difference between the sheen on themselves and the sheen on the amateurs. This has led some authors to question the value of the services they gained from the publishers, other than the obvious established distribution network. As the distribution element alone is worthwhile to them, they are content to bask in Big Pub’s protection and promotion over the amateurs. But they have increasingly found themselves forced to defend their position against the amateur newcomers, bringing to mind again the champion Apollo Creed discovering that his opponent, Rocky Balboa, was more of a handful than he could have imagined.
Other authors insisted on standing apart from the amateurs, basically not giving them a chance for direct comparison. These established authors remained steadfast in their assertion of their superiority, by virtue of having “paid their dues” and landed the prestigious publishing contracts, unlike the amateurs petitioning for recognition in the new, democratized e-book world.
~
But as retail outlets like Amazon’s Kindle store and Barnes & Noble’s e-book store proliferate, more and more established authors will realize they are no longer standing so far apart from the amateur or self-published authors, and that the services they had previously received from publishers may not be enough to protect them against the industry’s newcomers.
Some of those established authors have indicated that, the moment that day comes, they will retire from the publishing industry, either directly or subtly suggesting that the industry will be forever ruined by the e-book era and therefore not worth their time or effort. Many of these authors have fallen back on the more traditional complaints about e-books: the suggestion that removing printed matter from literature will irrevocably damage the romance, the tradition, the value, and the very soul of that literature. Others simply make it clear that they feel the new authors are not up to their ideals, and will never be so without the traditional publishing machine to groom them into “professionals.” Few of them seem to wish to publicly voice the obvious: That they may no longer be able to make the living they are accustomed to in the e-book industry.
A few authors have even suggested that the demise of the printed book was a harbinger of the demise of civilization itself. Poet Alan Kaufman once suggested that the transition from print to e-books would be akin to the book-burnings of World War II, and that e-books’ fans were the equivalent of Nazis.
These and other similarly impassioned claims seem to indicate a distinct and, in some cases, pathological fear of the uncertain future of the literature market, and a reflexive death-grip on traditional publishing methods. This mirrors the actions and attitudes of the majority of established publishers, which makes sense: many of these authors understand that their professional and economic futures are strongly linked to their publishing houses and established relationships, and to how well those houses manage to deal with change. Those authors who have decided not to let publishers’ inaction (or mis-action) drag them down are taking charge of their future by trying new publishing methods.
As some of these methods succeed and some fail, we can expect to eventually see coherent, workable business methods rise to the fore, and much of the guesswork of the early business environment should fade away. This will have a calming effect on established and professional authors, and individual authors will decide whether to adopt the new methods and stay in the book business, or opt out and either retire or seek a change of profession. But for now, everything is still in flux, and the pro authors, like the Big Publishers, are still rushing about, looking for direction.
20: The environment—The green movement vs. good ol’ paper
Since awareness of the need to deal with pollution began to develop in the 1960s, the global marketplace has had an uneasy relationship with the environment: Although the world’s industrial nature was the major culprit behind pollution, it was also the engine that allowed growth and progress. As much as people wanted clean air, they also wanted jobs, homes, food and money, and it was difficult to curtail industry while maintaining the perks of modern life.
Many industries came under close public scrutiny, usually because of their environmental impact on either prominent areas or prominent people, and as governments initiated pollution control laws, those industries came under heavy regulation. Others, keeping a low profile (and spreading around enough lobbying money to keep it that way), managed to avoid serious regulation.
However, with the deterioration of the ozone layer and the threat of global warming becoming more apparent at the beginning of the 21st century, many people began to understand that the stakes were much higher than they had realized. Scientists began looking past individual organizations and examining the amount of carbon being pumped into the atmosphere from any and all sources. Suddenly, anything that produced excess carbon, or prevented the planet from naturally capturing and sequestering carbon as it had for millennia, came under direct scrutiny, and few industries or technologies were not found wanting.
It was in this climate (ahem) that the green movement stepped into the debate between printed books and e-books, and sought to determine which, if either, form of literature delivery was best for the planet.
~
The attention lavished upon it by the green movement did not thrill the paper industry. They’d already had their fill of public attention, and still sported the scars.
At the beginning of the environmental movement, the paper industry had been stung by its association with the logging industry, which was one of the first industries to be savaged by the environmentalists. Already established as one of the premier industries of the American northwest, logging came under heavy criticism for its then-common practices of clear-cutting forests, driving local wildlife away or into extinction, and tearing up local ecosystems while harvesting trees primarily for lumber and paper use. Logging, long considered a noble and honorable profession, increasingly came to be regarded by many groups as akin to the rape of the natural world. Governments stepped in and imposed new regulations on the logging industry, requiring loggers to examine ecosystems before charging in, to mitigate ecological damage while working, and to restore forested areas that had been harvested.
These mandated activities helped mitigate damage to forested areas, but they were also a significant expense, and the logging industry quickly passed those costs on to the paper and lumber industries. Soon paper products jumped in price across the board, because the paper mills were being charged much more for their stock than in the past. This price jump created a ripple effect that touched off the systematic decimation of the press-printing industry, as the cost of large press runs could not compete against the smaller, targeted runs of the new technologies: computer printers, networked photocopiers and electronic files. As general printing began to dwindle, paper producers found their client base shrinking drastically. Their remaining major markets became office paper, newsprint and book paper… still substantial, but a shadow of their former markets.
I had a front-row seat to the decline of the printing industry, as the practices of my profession (running digital printing systems and converting documents to digital files) were instrumental in taking business away from press printers. As I worked away, printing companies downsized and then shuttered their doors, or were forced to convert to digital printers like the Xerox DocuTech and much smaller runs than they had enjoyed previously. Within a few years, it became obvious to me that the press-printing industry was on a powerslide from major industry to boutique service, an eventual victim of modernization.
Perhaps due to the hard times faced by the paper industry, little thought was given to the ecological impact of standard paper production. Paper production was a lot more than chopping down a tree; it typically involved rending, pulping, re-congealing, bleaching, cutting and packaging wood products into thin, flat sheets. These processes required heavy transporting and milling machinery, significant amounts of power to move and work heavy products like lumber, and the requisite lubricants and materials to keep that machinery running. The pulping, congealing and bleaching involved up to 40 caustic and toxic chemicals and bleaches, plus huge amounts of fresh water. These chemicals and water had limited recyclability, generally becoming useless after one or a few cycles, after which they would simply be dumped into the waste stream—perhaps a local stream or lake—and end up in the local watershed. In short, there was nothing particularly clean, nor easy on the environment, about the paper industry.
On top of the mills’ own processes, their products had to be shipped from place to place. This was hardly a clean process, either: Although some stock could be transported by train, a relatively efficient mode of transportation, at some point all of it had to be moved by diesel-burning semis and trucks on the nation’s highways and in the cities. The trucking industry was likewise under scrutiny for the pollution caused by old, inefficient truck fleets and inefficient daily practices (like idling overnight and during rest stops), and under pressure to clean up its act.
Next, the major printing operations, the ones that did the heavy newspaper and book production, used still more electricity and required oil-based lubricants for their printing machinery. The books themselves were printed with inks that often included toxic trace minerals and metals. Once printed, the stocks were generally transported in smaller collections, which invariably meant more inefficient truck transportation. Those printed products then had to be stored somewhere, generally in warehouses; and because paper is highly susceptible to environmental conditions, the warehouses generally had to be climate-controlled, requiring still more energy. And finally, those products had to be moved to local bookstores, which used more power for lights and climate control, until the books were bought by consumers and finally removed from the industrial energy- and pollution-stream.
There was no way to put a clean face on this overall process, and the printing industry knew it. So it made every effort to remain quiet, to avoid entanglements with environmentalists, and to quietly lobby to protect itself and its interests. Today, the bulk of the public has only a general idea of the process of delivering a book to their hands, and little sense of the real environmental impact of that process.
~
The electronics industry was, if anything, a bit more used to public scrutiny, as it had developed largely in conjunction with the environmental movement, and had had “greenies” looking over its shoulder from day one.
Certainly the electronics industry was no environmental saint. Electronics processing involves various forms of oil-based plastics, as well as various precious and toxic metals, some of which are mined in very dirty processes. Copious amounts of electricity go into electronics production as well: in this case, not to shift heavy machinery, but to orchestrate the myriad finely-detailed operations of precision machinery, lasers, calibration and testing equipment, and the microscopic monitoring required by fine manufacturing. The industry also uses water and chemicals in many of its formation and polishing processes. Electronics manufacturers are noticeably better at recovering and recycling materials for re-use, again thanks to being monitored by environmental organizations from the beginning; but they still consume a great deal of energy and a variety of materials in the process of turning out their electronic devices. And they, too, require transportation, with its requisite environmental costs, to get the devices into the hands of consumers.
The largest concern with electronics, in fact, is not manufacturing but recycling. Although, as described, electronics manufacturers take care to be relatively clean producers, the industry does not have a well-established recycling method. As electronics are discarded, many of them are simply landfilled, allowing their plastics and toxic metals to end up in the watershed. In some cases, landfilled electronics are manually broken down by individuals seeking to recover the precious metals, but in the process those individuals are exposed to the toxic metals alongside them, putting their health at severe risk. Governments and institutions are working to establish methods to safely reclaim and recycle those materials, but such methods are not yet in place in most regions and countries. The majority of electronics are still being landfilled, and polluting our planet after use.
And there is the fact that e-book reading devices must consume energy to be used. Though the latest devices are very miserly, they still require power to be added to them at intervals, contrasted against printed books, which require no power just to read.
~
Given these two realities, it has been difficult to establish a clear winner and loser in the battle for cleaner book delivery. Both sides are experienced in pointing out the shortfalls of the other to claim dominance, and with heavy lobbying on both sides, the printing and electronics industries have remained at an environmental stalemate.
There is, however, one more consideration that can shift this argument: A single e-book reading device, while possibly polluting the environment as much as one (or even, depending on the figures used, several) equivalent printed books, can store in its memory hundreds, even thousands, of books. On that comparison there seems to be a clear environmental winner: The e-book reader has a distinctly smaller footprint than the 100-1,000 printed books it can replace. Further, the e-book’s footprint can still be improved by establishing better after-use recycling methods and designing for lower energy usage. Those methods have already been demonstrated effective; it is just a matter of putting them to use.
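The amortization argument here can be made concrete with a back-of-the-envelope calculation. To be clear, every figure in this sketch is a purely illustrative assumption, not a measured number; the point is only that a device’s one-time footprint is divided across every book stored on it:

```python
# Hypothetical break-even sketch: how many printed books' worth of
# environmental impact equals one reading device? All figures are
# illustrative assumptions, not real measurements.

READER_FOOTPRINT = 75.0  # assumed lifetime impact of one device (kg CO2)
BOOK_FOOTPRINT = 2.5     # assumed impact of one printed book (kg CO2)

def breakeven(reader_kg, book_kg):
    """Number of printed books whose combined impact matches one reader."""
    return reader_kg / book_kg

def per_book_impact(reader_kg, books_stored):
    """Device impact amortized across every book actually read on it."""
    return reader_kg / books_stored

books_needed = breakeven(READER_FOOTPRINT, BOOK_FOOTPRINT)  # 30.0 books
amortized = per_book_impact(READER_FOOTPRINT, 500)          # 0.15 kg per book
# With these assumed numbers, a reader "breaks even" after 30 books, and a
# 500-book library costs a small fraction of one printed book per title.
```

Swapping in different assumed figures moves the break-even point, but so long as a device holds hundreds of titles, the per-book footprint shrinks toward zero.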
The printed book can undergo better recycling efforts—unknown to most of the public, presently only about 5% of printed paper is recycled—but the reality of the material is that it can only be recycled 2-3 times before it is useless as a paper product (and not good for much else). And despite the suggestions of the logging industry to the contrary, the trees replanted after harvesting are not filling the wilds as fast as mature trees are being removed… we are still losing more tree volume than we are gaining, along with those trees’ ability to interact with the environment, sequester carbon and mitigate forest damage. Paper’s environmental footprint cannot be shrunk significantly enough to offset its environmental impact.
Unfortunately, the green movement does not control the publishing industry, which has already established its desire to maintain the status quo of its existing operations. Big Pub quietly supports the logging and paper industries as-is, and has so far responded to escalating paper and production costs by simply passing them on to the consumer. Until the consumer begins to object to the rising cost of their paper products, as well as a concern as to the environmental impact of that paper production, Big Pub has little incentive to alter their processes, and every reason to prop them up as much as possible.
21: The technology—My reader’s better than yours! (Nyah!)
Previous chapters have alluded to an issue that has been as hotly debated as the various formats, the quality of content, the prices, the security, the environmental friendliness and the ethical ramifications of e-books: The technology of e-book reading devices; or, to be more specific, the display screens.
When e-books were beginning to develop, the state-of-the-art in electronic display was the liquid-crystal display, or LCD. LCD technology was just beginning to supplant cathode-ray tube (CRT) technology in computer displays, and more advanced versions were being developed for the second generation of laptop computers (the first generation had used crude monochrome LCD and gas-plasma displays). LCDs were more expensive than the venerable CRT, owing to the fact that CRT technology hadn’t changed in decades and manufacturers had long since found the most cost-efficient ways to produce screens; but LCDs were more portable, and capable of running on significantly less power than a CRT, which made them ideal for laptops.
At first, LCD display technologies were redesigned almost every other year, requiring manufacturing plants to retool for each new display type. As a result, during the first decade or so of development LCD displays were kept very expensive to cover the plant retooling costs. Eventually, though, the technology settled down, plants no longer incurred the high costs of retooling, and LCD displays began to come down in price. By the mid-2000s, LCD displays sold for prices comparable to CRT displays, and soon the sleeker, more power-miserly displays began to replace the CRTs on desktops everywhere.
LCD quality varied greatly during the technology’s early history. Some LCDs tended to flicker, especially when viewed under fluorescent lighting. They also often cast a harsher, colder light than CRTs. And the image itself was a digital display, made up of pixels that often lacked the apparent resolution of a scanned CRT image. Many people who had grown up around CRT displays had a hard time getting used to the first LCDs. LCD manufacturers responded by adding adjustment controls to all LCD displays, allowing the user to adjust brightness, contrast, hue and saturation, as they could with CRT displays.
Unfortunately, many people did not take the time to adjust their screens, accepting them at their default settings (which were usually set by the factory to look good on a store display shelf), often not knowing how to operate the controls, or even realizing they were there. The default settings looked good in the store, but were much harder on the eyes when viewed for long periods of time. Consumers therefore began to complain about the “quality” of LCD screens, and of experiencing eye fatigue and strain. These complaints about LCDs continue to this day, often subjective and without hard physical evidence to support them… on the other hand, LCD screen use is a relatively new phenomenon, and the lack of hard data may be more an indication of a lack of time to adequately study the phenomenon than a lack of veracity. And not everyone’s eyes react similarly to screen use, so LCD screens can hardly be condemned as bad for everyone’s visual health.
When the first e-book reading devices were introduced, LCD screens were the displays being developed at the time. Though some consumers took to them right away, those who had previously experienced eyestrain with other LCD screens immediately criticized these devices, saying they could not stare at them for long periods without suffering the same eyestrain they’d experienced with their desktop or laptop monitors. For the same reason, they said, using their laptop or desktop with its LCD display to read an e-book was similarly unacceptable.
LCD manufacturers, hearing the complaints, experimented with new display drivers, the software that controlled the rendering quality of the screens. Microsoft, itself interested in becoming an e-book reseller, responded by developing a new display rendering process designed to address many of the problems reported by consumers. Called ClearType, the system replaced the sharp, bitmapped edges of digitally-rendered text with a boundary of progressively-shaded pixels (exploiting, on LCD screens, the separate red, green and blue sub-pixels of the panel) that “softened” the edges of the text. The human eye was essentially fooled into seeing smooth edges and shapes, and did not need to apply the same effort to focusing on and interpreting the shapes as letters, so fatigue was lessened.
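ClearType itself exploits the fixed red-green-blue sub-pixel layout of an LCD panel, but the core edge-softening idea can be sketched as plain grayscale anti-aliasing: render a glyph at higher resolution, then average each block of on/off pixels into a shade of gray. A minimal sketch; the function and the toy glyph below are invented for illustration, not Microsoft’s actual algorithm:

```python
# Toy sketch of edge "softening" (grayscale anti-aliasing), the idea
# behind smoothed text rendering. A high-resolution 1-bit glyph is
# downsampled so each output pixel's gray level reflects how much of
# its area the letter shape covered.

def antialias(bitmap, factor=2):
    """Downsample a 1-bit bitmap by `factor`, averaging each block
    into a gray level from 0.0 (blank) to 1.0 (fully inked)."""
    h, w = len(bitmap), len(bitmap[0])
    out = []
    for y in range(0, h, factor):
        row = []
        for x in range(0, w, factor):
            block = [bitmap[y + dy][x + dx]
                     for dy in range(factor) for dx in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

# A crude 4x4 diagonal edge: fully "on" at and below the diagonal.
glyph = [
    [1, 0, 0, 0],
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [1, 1, 1, 1],
]
smoothed = antialias(glyph, factor=2)
# Each 2x2 block becomes one pixel whose gray level reflects coverage,
# so the hard staircase becomes a gradient the eye reads as a smooth edge.
```

The same averaging trick, applied per color channel to the sub-pixels of an LCD, is what lets sub-pixel rendering triple the apparent horizontal resolution of text.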
Today many users of the ClearType setting, and of similar display settings on non-Windows devices, swear by its quality and comfortable viewing, and declare it a vast improvement over early LCD displays. I myself have used the ClearType settings on my desktop, laptop and PDA screens for years, and find it an infinitely better reading experience, allowing me to work with or read on an LCD screen for hours without fatigue or discomfort.
As effective as ClearType was, however, it was still buried in the display controls of every Windows computer, and remained elusive or invisible to most users. A sizeable segment of the population continued to consider LCDs hard on the eyes, and not preferable for reading.
~
There was still enough concern about the quality of reading on electronic displays, not to mention a desire to create a display that used less energy, to keep display research alive. And in the early 2000s, the Electronic Paper Display (EPD) was developed.
Unlike the light-based LCD, LED and CRT displays, EPD was a different animal altogether: A sheet of tiny organized capsules, each holding black and white particles, oppositely charged, suspended in a fluid. A minute current applied to a capsule would push the desired black or white particles forward to be displayed. Then the current could be turned off, and the particles would stay in place without consuming any further power.
Not only did the EPD displays use less power, but being a composite surface that reflected light to the eyes, as opposed to shining bright lights at the eyes, visually the experience was more like reading ink on paper (also a reflective medium). Users immediately compared the reading of EPD screens to “reading on paper,” and many consumers clamored for the display technology. The technology was soon to be known by its trade name, “eInk.”
Companies like Amazon and Sony saw this as the technological component they finally needed to create well-received e-book readers, as so many consumers still seemed to rail against LCD displays, and as the eInk screens would provide a battery life of literally hundreds of hours to the devices. 2004 saw the first eInk device introduced by Sony… though expensive, its display was lauded by industry and consumers alike. Soon e-book reading devices were coming equipped with eInk technology, and more and more consumers trumpeted the visual quality of their devices. Amazon’s Kindle, also an eInk device, was an instant success, partly because of the lack of complaints consumers had with the display… they were finally willing to read on electronic screens.
~
As the new eInk displays proliferated, many e-book consumers were still reading on older devices, including laptops, PDAs, cellphones, and other devices with LCD displays. Many of these were small, handheld devices with screens of 2x2, 2x3 or 3x4 inches. Reading on these devices was a novel act for most early users, but after a while, those who enjoyed the advantages of reading content on-the-go got used to the smaller form factor… often to the amazement of friends and bystanders. Others did not really like the LCD displays, but for the sake of functionality, put up with them as long as they had to.
The introduction of eInk changed that for many LCD-reading consumers, who snapped up the eInk devices and instantly swore by them. However, there was still a population that was perfectly happy using their existing LCD devices. People began to take sides, creating an ongoing debate as to which display technology was “better.”
Manufacturers watched the debates with interest. Many of them wanted to get in on the e-book market. However, the eInk-based readers and the LCD-based devices used very different hardware systems. Manufacturers wanted a simple plan to follow, a single design on which to base all future manufacturing. But that single design did not seem to be forthcoming, amidst all the arguments for one display or the other, and the young age of the eInk industry. Faced with an uncertain market, some manufacturers bowed out of the device market and set their sights on other markets.
The hardware decision also affected e-book software developers, because they had to be able to create rendering engines for each type of hardware… and those rendering engines often included more complex DRM systems. They likewise wanted to minimize the number of platforms they had to support. Unfortunately for them, the various platforms were already established, and as new hardware types came along, they were immediately pressured to create versions of their format software for each device. Some developers managed this better than others; some only had the resources to support one or two platforms, leaving the rest hanging.
Hardware issues also impacted those who wanted to get in on the e-book action, but could not divine the best way to do that without standardized tools. Among those groups were entities like library systems and educational institutions: Public clamoring for e-books had forced many of them to look into everything from supplementary programs to replacing their printed stock with e-books. But without a standardized piece of hardware to format their programs around and endorse, many found it difficult, if not impossible, to begin a program that would be all-inclusive for their customers.
Eventually this manifested in the current market of e-book formats that are supported on some devices but not others, or that were once supported on a certain device but abandoned when the developers shifted their concentration to newer devices. Some third-party and amateur programmers have filled in a few of the software/hardware gaps, and the emergence of ePub as a default e-book format has simplified the issue for many developers. However, many legacy e-books are not being retroactively converted to newer formats like ePub, and those who have existing e-book collections must keep in mind (and sometimes guess) which devices will allow them to continue enjoying their books in the future.
The format situation remains fractured amongst the many different hardware platforms available to the public… leaving consumers forced to choose what they can and cannot read based on the hardware they have, and which formats it does, or will, support.
22: The marketers—Ads about nothing?
While the parameters and practices for marketing printed books, both hardback and paperback, were long established, no such marketing method existed for e-books.
Taking a look at past publishing practices, this might seem surprising. Publishers have always concentrated their marketing efforts first on hardback books, their highest-profit item, and generally the first version of a book made available. When paperbacks were introduced, publishers would bring them out after the hardback had largely run its course, and would switch their marketing efforts to promoting the paperbacks. So why don’t the publishers plan to release the e-books after the paperbacks, and switch their promotion efforts to e-books?
The reason is that publishers expect e-books to effectively kill all future sales of their books in one fell swoop. With the issue of piracy prominent in their minds, publishers assume that the release of an e-book will quickly reach every interested reader who has not already bought a printed copy, making the printed books unsellable from that point on. Further, they expect e-books to be heavily pirated and bring in little real income themselves, effectively marking the end of a book’s profitability.
Therefore, publishers like to act as though the e-book version isn’t there (at least in prominent commercial venues), in order to get the most mileage out of their printed books.
Marketers have explored the possibilities of selling e-books to the public. But with publishing’s concerns about the cannibalization of a book’s profitability, most of the marketers’ efforts thus far have gone toward promoting e-books, if at all, in a general sense … and leaving individual books out of it. To an extent, this hurts the publishers themselves, as many individual books and series could continue to sell beyond the point at which the printed matter has ceased to. But as the publishers continue to believe that e-books will only be pirated, they see no potential for further profit from books or series that have outlived their printed shelf-life.
Marketers either cannot seem to come up with selling points that would make e-books attractive products for consumers, or are being restricted from doing so. Many of the public perceptual issues persist, such as the supposed inherent “inferiority” of the reading experience on an electronic display, as opposed to paper. Marketers are also hampered by the prices of most e-books, set by the publishers and/or sellers, which most consumers reject as too expensive (especially after they may have paid $200-$300 for a reading device). They can hardly consider DRM a selling point, and there is nothing that can be done with an e-book once it is bought… it cannot be resold to others or to used bookstores, or returned for a refund.
In the past, portraying a product as reflecting personal intelligence could be an effective marketing tool. Sadly, though, we seem to be in a period of emotional, as opposed to intellectual, appreciation; being smart, or even appearing smart, does not have the attraction it once had.
There is, of course, always the standby marketing method: Sex. Sex has sold every type of product known to man since the first huckster commented on how good a bolt of hide made a man or woman look to others. Unfortunately, these days there is little to nothing considered sexy about reading. (On the other hand, that has never been an impediment to sex-based sales before...)
There is practicality as a marketing point. E-books have a unique economy of scale that is hard to deny, similar to the one that has already proven popular with digital music; surely this would be a strong selling point.
But e-books' economy of scale flies in the face of one of the most established conventions of book ownership: Displaying one's collection. Showing off a collection of cherished or carefully-selected books, on shelves, in studies, or in libraries, has always been considered an integral part of the book-owning experience. To remove this highly-demonstrative convention is practically anathema to modern book owners.
Finally, we are left with sensibility. As described in Chapter 20, the green movement has promoted e-books as being more ecologically sensible than printed books. Surely appealing to consumers' desires to be environmentally sound would be a major selling point.
Unfortunately, marketers have not been able to fold environmental concerns into an attractive and effective package, the way images of clubbed baby seals turned a major segment of the public against fur. In fact, environmental sensibility is indelibly linked to expectations of hardship and sacrifice; a hard sell at any time, but especially during difficult economic times. Add to that the higher personal cost, even in the name of environmental sensibility, and you have a doubly hard product to sell.
So, marketers seem to have none of the usually-effective marketing tools at their disposal—except, perhaps, sex. And considering the product, even that would be a stretch.
~
Marketing had another potential role to play in the e-book arena: Finding ways to augment the profits gained by e-books. As consumers invariably felt e-books were too expensive (and the Anarchists, of course, insisted they should be free), the marketers had an opening to step in and find another revenue source for e-books through advertisements… marketing other products through the e-books.
Interestingly, instead of embracing, or at least accepting, ad subsidies for e-books, consumers screamed their disapproval. Many of them cited the most annoying and invasive of web-based advertisements: pop-ups, animated ads, and Flash-based interactive ads, plus the latest television ads that appear along the lower edge of a program in progress, and assumed that the same types of ads would eventually creep into e-books’ every page and ruin the reading experience. Although there was little to suggest this would actually happen, there were no assurances forthcoming from publishers and marketers that it wouldn’t, which only served to justify consumers’ fears.
It might have been expected that some brave publisher would try the ad subsidy idea anyway, trying to place simple and not-so-invasive ads into an e-book, lowering the price (possibly even to zero), and waiting for the public response. But apparently no publisher has been that brave. To date, the only ads generally seen in e-books are those advertising other books by the same publisher, the publisher itself, or possibly the print version of the book just read.
So, with few decent advertising options, and faced with an ad-hostile public, the marketers decided to punt. As a result, the only e-book-related advertisements seen by the public are for the reading devices themselves, with little or no indication of exactly what can be read on them beyond images of the latest bestsellers. What e-book advertisements there are, are generally kept separate from ads for the print versions of a book, and where print versions are advertised, e-book versions are seldom mentioned. You will find them wherever other e-books are sold, however… sometimes so numerous that it is hard to navigate around the ads to find the e-books.
Once upon a time, paperback books were similarly ignored by publishers, until they were established in the publishing stream. It can be inferred that, eventually, e-books will be effectively marketed alongside hardback and paperback books… it is even possible that e-books could become the primary product, largely replacing paperbacks, with hardbacks relegated to special gift status. But that day is still far down the road for e-books, and in the meantime, they remain products that, as far as the publishers and their marketers are concerned, they’d rather you didn’t buy.
23: The literature—Prisoner (casualty?) of war
Once literature began to be mass-produced in printed form, it could be commoditized. Publishers embraced that idea, and used it as a guideline to transform “literature” into “books.” Printed volumes were what the publishing industry was actually selling: Paper and covers, logged, milled and printed, shipped and handed over for cash, were the products they were dedicated to. The entire money trail was designed and perfected, over decades, around the processing and selling of bound paper containers.
When consumers gripe that most publishers aren’t in it for the art, that they’re in it for the money and money alone… they are absolutely right. And though most publishers will not say so in public, they are well aware of that fact. Publishers largely did not care about the quality or value of the content, only its raw desirability. The literature certainly had a place in the publisher’s world… but as a mere detail. Titles were treated as extraneous filler, and authors were used as figureheads. It is telling that the publishing industry, and those following it, use authors’ names more often than their actual book titles. The authors were superstars, box office draws, used to garner sales by name recognition alone.
So, when e-books arrived on the scene, and threatened to bring an end to the “paper trail” that publishers had painstakingly built up and functioned by for so long, they had no idea what to do. Without the paper product, what else was there? How could an industry devoted to selling a physical product deal with an effectively non-physical product? The answer, of course, was that the industry as-is could not transform itself into a seller of a non-physical product. And despite the writing on the wall, the publishing industry wanted to maintain itself as long as humanly possible, just as it was.
The steps the publishing industry has taken to-date, to deal with e-books, have all been oriented towards minimizing e-books’ impact on the existing business model and the physical products they already sold… printed books. And as for the e-books themselves, the publishers concentrated on concepts like DRM that would, in theory, bind the e-books to a physical device, and thereby create a sort of hybrid electronic and physical product that the publishers could still wrap their heads around.
Editors and publishers liked to insist that their efforts were what made a book sellable. Yet, their efforts weren’t devoted to making a story a better story; rather, their efforts were devoted to making a story more sensational, more exciting, more sexy, more sellable. “It’s a good love story, but a sex scene here will make it great!” “What about changing this argument into a chase scene… that will really sell it.” “That villain was good, but the audience will cheer if you kill him off at the end.” Shock and awe was their real motto, because that brought more people in to buy books, just as it sold movies and won high television ratings.
The authors were well aware of this. They knew how the process worked: All but the most famous of them would write a story, then let the editors tell them how to punch it up so that John or Jane Doe—hopefully all the John and Jane Does—would pick it up at the supermarket. They did not question the editors’ directions; after all, the editors knew the market, that’s why they were the successful Big Pub editors; and that’s how the authors would make more money.
But as e-books slipped into the market, the editors did not come to the writers to tell them how to write for the new medium. In fact, they continued to tell them how to write for printed books, and ignored the new medium. And as e-books developed, and writing styles and voices began evolving to fit the new medium, established Big Pub authors were still churning out print material not at all suited to the new digital formats.
Some authors made it clear that they wanted to embrace the e-book format. Most of them, however, were bound by contract with the publishers, and oft-times, e-book rights had been written into the contract and placed outside of the author’s hands. The author could write e-books anyway… but they would risk losing their contract, and their access to the Big Pub world (and paycheck). As most authors did not want to risk their insider status, or their income, they would back down and turn away from the e-book world to please their publishers.
In the meantime, authors unencumbered by contracts began writing for e-books. But even the best of them ran up against that perceptual brick wall, erected by Big Pub, which suggested e-book material was by definition inferior. Then they faced the reality that publishers treated them as “damaged goods” for self-publishing, as if they had labeled themselves inferior. Superior works that might have gone on to be well-publicized bestsellers with Big Pub assistance instead languished in an indie book limbo.
This strategy kept best-selling books out of the limelight if they were published in e-book formats. It also continued to keep good e-books down by artificially labeling them and acting to reinforce the stereotype, and effectively maintained e-books’ status as being inferior work.
~
As stated earlier, books were commodities. They were priced according to author popularity, and according to book size (the amount of paper used to print it). They were not priced according to the perceived quality of the story itself; the value of the story was entirely inconsequential to the paper trail and the bottom line.
With e-books’ emergence, the paper trail was suddenly whisked away, leaving nothing to quantify except the work itself, the direct contributions of the author, editors and proofers. It seemed the perfect opportunity to redefine the real worth of literature, based on the literature itself and not its packaging. Such a redefinition could serve to justify e-book pricing to an extent that would satisfy creators and consumers alike, and be the basis for an evolution of an industry to a digital standard.
Yet, the redefinition never quite happened. Publishers and authors were afraid a redefinition would result in smaller salaries and lesser profits for them, while consumers had no confidence that publishers would not overly pad their new financial models and overcharge for e-books. Consumers tried to rough out numbers of their own, but without a real understanding of the industry to go on, their optimistic figures were meaningless to those in the industry. Further, the Anarchists still insisted that without a physical package, an e-book essentially had zero real value, and should be priced accordingly. Attitudes like this continued to dismay publishers, who were beginning to see no way to satisfy such public perceptions.
Authors also needed this redefinition. They were feeling increasingly marginalized, as discussions continued around them, over them, but never with them. Publishers insisted they were what put the value into books… consumers argued that an e-book without a publisher, therefore, should cost nothing… and no one asked the author what they deserved to get for their work in writing the story. Their efforts to place a value that they considered “fair” on their own e-books inevitably resulted in disagreements with consumers who felt they knew better what profits a writer deserved, and who argued that the “nearly infinite” replication potential of an e-book meant that an author could theoretically make millions off of an e-book, and so had no right to charge beyond a few pennies for it.
And still the story itself, the quality or quantity involved, did not figure into costs. Everything came down to electronic files and the people who revolved around them.
It’s no wonder, therefore, that authors felt a growing disassociation from e-books, a sense that they were being devalued, underappreciated, left out of the equation… and began to see less and less earning potential in the e-book. Feeling so disassociated, it’s no wonder that so many authors said they’d rather retire than become part of the e-book world. The industry was threatening their livelihood and robbing them of their very spirit to create, at just the time when it was set to transform itself and open new opportunities to them.
24: The gurus—When you can snatch the e-book from my hand…
All of the head-butting and arguing of the last 23 chapters was not going on in a vacuum: It was being duly observed by interested authors, editors, programmers, hackers, readers and other e-book enthusiasts around the world.
Unlike many of the participants of the e-book saga, these enthusiasts were truly interested in finding ways to advance e-books in the marketplace, and develop viable financial models for their sale and profitability. Instead of merely arguing, they made a concerted effort to examine the sides, analyze the positions, seek answers to these issues that had been applied to other markets, support positive efforts, and make reasonable and compelling suggestions as to how the market should proceed.
These enthusiasts began to rise to the top of the e-book world, as leaders rise to the top of a mob. Congregating in online forums and blogs, they would join discussions about various aspects of, and concerns about, e-books, and often dispensed good suggestions for working around a problem, as opposed to joining in pointless commiserating and finger-pointing. When questions were posed, they were often the first to provide sensible answers, as opposed to snide or sarcastic remarks. When people had complaints, they often tried to mitigate disagreements with observations designed to clarify the problem for both sides.
Some of these enthusiasts were attempting to enter the e-book field themselves, having seen some advantage it held for them, or its potential for growth. Studying the concerns voiced by others, they made a point of developing business models designed to address those concerns. Bypassing the traditional aspects of publishing business models, they accepted the need for new guidelines and behaviors more in line with the new medium. They became the commercial trailblazers of the e-book field: the e-book gurus.
Individual authors, many without a foothold in the traditional publishing industry, struck out on their own and tailored their literature to the e-book formats. Either working alone or banding together with a few other authors, they created online presences and developed sales models that consumers largely found very reasonable. They also made themselves accessible to their audiences, replicating the “friendly neighborhood shopkeeper” ambiance that customers responded positively to, thereby reducing the desire to take advantage of them.
Other publishers experimented with new selling methods, following the comments made by their prospective customers. Reasoning that a satisfied customer is a return customer (and one that helps drive new business their way), they optimized the e-book buying experience according to what customers had said they wanted, wherever possible.
Fortunately, there were many web-based tools out there that allowed independents and small publishers to customize their operations as they needed. From elaborate e-commerce setups to simpler payment gateways through PayPal or eBay, e-book sellers could choose for themselves how to set up their business: spend a little or a lot, contract out their web setup or do it themselves. This resulted in website designs and functionality that were all across the board, but overall satisfying to customers, if for no other reason than that the customers knew the vendors were trying to please them.
As the gurus developed practical and workable e-book production and selling methods, other authors and vendors took note. Not all of them followed the examples of the gurus, but those who did enjoyed an almost instant acceptance from e-book consumers, whereas those who followed more traditional paths were generally treated as Luddites, their old-fashioned ways hampering the progress of e-books. Vendors’ concern about their image in the market varied, depending largely on whether they felt it impacted their bottom line; and particularly large vendors, such as Amazon and Sony, gave much less weight to the guidelines of gurus than to the advice of their own staff and the desires of stockholders. They were established corporations, and the opinions of individual outsiders had never been part of their business strategy. Even so, they often paid attention to the voices of the gurus, because very often those were the only voices in some discussions that were clear and impartial, and provided a usable perspective on an issue that a publisher could understand and address.
~
The gurus began to garner a respectable following, and were seen as industry leaders, but only within the small groups that followed them. The ranks of casual e-book customers, drawn in by Amazon, Sony, Barnes & Noble and the big publishers entering the e-book field, were slowly overtaking the gurus and their enthusiasts, until it was unclear whether their “wisdom” would continue to have an impact, or whether it would be overshadowed by the corporate giants. It was also unclear whether the plans of the corporate giants would prove sustainable over the long haul, raising the possibility that the gurus’ teachings might one day resurface and guide the industry in a new direction.
Presently the industry is dominated by two factions: The big corporations, which go their own, largely traditional, way; and the smaller publishers and individuals, who listen to the gurus for advice and guidance and follow a more modern, progressive path. Consumers are similarly divided, between the early adopters, small pubs and indies, who likewise heed the gurus; and the casual e-book customers following the commercial successes of the Big Pubs, who are largely unaware of the public history or support base of the e-book industry.
There seems to be room in the market for both factions, but the ultimate impact the gurus will have is uncertain: It may be that the gurus will still be able to steer indie and small publisher planning, or at least provide a source of opinion to guide it; or the gurus could find themselves pushed out of the picture if the large corporations come to dominate the e-book market and dictate even how the small publishers and indies have to function within the larger market. But even if their influence turns out to be temporary, it is to be hoped that their early influence helped to provide some order to the chaos, and got the e-book market off to the best start it could have hoped for under the circumstances.
Epilogue: The future—Where are my flying books?
It has been in the nature of technological progress that it doesn’t always progress as expected. This has been especially true in entertainment media, which has historically followed in its previous format’s footsteps upon being created, until someone came up with an idea that took it in new and often unexpected directions. For example: Radio initially settled for recording theatre performances, until the idea of reading scripts to portray scenes in a sound stage was conceived; later, early television simply recorded actors in studios reading scripts to an audience, until producers developed teleprompters (to hide scripts), and later created actual sets… ironically, returning television to the roots of theatre, until the ability to shoot in actual locations was developed.
Digital music has likewise shaken up the audiophile world: After decades of music producers and artists developing the “album” concept of ordered singles in a coherent physical package, the MP3 file has brought new meaning to the “mix” concept that actually began in the magnetic tape era, and selling music singly again is becoming the delivery standard.
We should expect some similar development activity to take place at some point in the e-book arena. A few forward-thinking artists and enthusiasts have proposed new digital literature models, including the addition of web links or multimedia, non-linear storylines, combined storylines from multiple sources, storylines influenced by readers in realtime, and so on. There are probably future e-book formats that we haven’t even conceived of yet.
At the same time, many of the overriding concepts behind these forms of media have remained essentially unchanged by the development of the media: Music remains music, acting remains acting, comics still tell jokes, and people still laugh. The linear narrative that dominates print can be expected to continue, since it actually existed independently of printing… and is, in fact, an art form older than theatre.
So what is the future of e-books likely to be, and how will that impact the future of the media or the technology? We can make short-term predictions that will be fairly accurate, much like predicting faster automobiles that will get better mileage on the road. But when it comes to the more radical ideas—the popular icon of which is the “flying car” of the future—we literally can only guess (and we’re more likely than not to be wrong).
~
There are some major developments we can see happening right now. E-books are becoming the subject of mainstream interest in the media and markets, as more booksellers are opening e-book portals, and more companies are selling reading devices. E-book readers are being highlighted in stores and sales, and their individual characteristics and qualities are being discussed in popular and non-technical publications.
The OEB (ePub) format really is becoming the default format of e-books: With the exception of some of the big-box vendors like Amazon, many of the other large vendors are reconfiguring their reading devices to read ePub, or beginning to sell ePub in their portals. This standardization is aiding the mainstream phenomenon, making it easier for new consumers to find e-books for their devices from multiple sources.
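For the technically curious, part of what makes ePub so easy for vendors to adopt is that, under the hood, an ePub file is just a ZIP archive with a fixed internal layout. The sketch below (Python, standard library only; the function name and file contents are illustrative, not from any particular toolchain) assembles the bare skeleton of that container: an uncompressed `mimetype` entry first, a `META-INF/container.xml` pointing at the package document, and a placeholder for the package document itself.

```python
import zipfile

def write_minimal_epub(path):
    """Assemble the skeleton of an ePub container: a ZIP archive whose
    first entry is an uncompressed 'mimetype' file, plus a
    META-INF/container.xml that points readers at the package document."""
    with zipfile.ZipFile(path, "w") as z:
        # The 'mimetype' entry must come first and be stored uncompressed,
        # so reading software can identify the file without unzipping it.
        z.writestr("mimetype", "application/epub+zip",
                   compress_type=zipfile.ZIP_STORED)
        # container.xml tells the reader where the package document lives.
        z.writestr("META-INF/container.xml", (
            '<?xml version="1.0"?>\n'
            '<container version="1.0" '
            'xmlns="urn:oasis:names:tc:opendocument:xmlns:container">\n'
            '  <rootfiles>\n'
            '    <rootfile full-path="content.opf" '
            'media-type="application/oebps-package+xml"/>\n'
            '  </rootfiles>\n'
            '</container>'), compress_type=zipfile.ZIP_DEFLATED)
        # A complete ePub also needs a real content.opf (metadata,
        # manifest, spine) and at least one XHTML content file;
        # those are omitted in this sketch.
        z.writestr("content.opf", "<!-- package document goes here -->",
                   compress_type=zipfile.ZIP_DEFLATED)
```

Because the wrapper is plain ZIP plus XML, any vendor with a ZIP library can produce or consume the format, which goes a long way toward explaining why so many device makers found it easy to reconfigure their readers around ePub.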
Schools are beginning to look at the practical aspects of e-books vs printed books, and are seeing ways to cut costs, open distribution of books to more people at once, remove the physical load of multiple heavy textbooks on students, and economize on space and infrastructure expenses. In these economically tight times, being able to reduce costs and provide students with more comprehensive access to library services will enhance educational opportunities and make schools more efficient.
Google’s push to scan out-of-copyright books for online access has resulted in copyrights and user rights coming under review in many countries. Global copyright treaties are being discussed, with an eye to bringing better definition and control to the issues of digital files and their fair use around the world. And countries are beginning to examine their wealth of national literature, with the intention of converting them into digital files themselves and making them available to posterity.
New publishers that have embraced e-books are joining established publishers in the major e-book sales portals. Individual authors are finding more self-publishing tools available to them, and easy access to the online sales stores of companies like Amazon, Barnes & Noble, Sony and others. And as tools improve, the boundary between Big Pub-produced books and self-published novels is becoming thinner and thinner.
Each of these steps serves to bring e-books further into the mainstream consciousness, increase their popularity, and allow the market to influence the future formats and selling paradigms that will become the e-book industry.
~
This developing but still unknown e-book future will have a sizable impact on everyone, from the publishing industries, to individual artists, on down to consumers. History has shown that when aspects of an industry change significantly, not all of the older players can manage to adapt to the changes. In fact, some of them will take the position that they only want to do business the traditional way, and will refuse to even try to adapt. Others will try to adapt to the new ways of doing things, but for one reason or another will fail in their attempt, or at least not succeed well enough to stay viable in the industry, and will be forced to close their doors. Some companies will pool resources, merge and reshuffle their operations in order to reshape themselves for the future market. And some brand new companies will form, with business models that would not have made it in the past era, but which can be successful in the new era.
This reshuffling of old publishing corporations, new publishers, and other entities that may not yet be a significant part of the present publishing industry (like the indie authors) will eventually present a new face to the literature industry, and that new face will to varying degrees be ready and able to try new media models and presentation formats, to possibly take e-books in directions we can scarcely fathom today.
The global market will also be changed by e-books, as the old ways of marketing, buying and selling will be changed by the demands of the new media, and the new business methods that will be developed over time. As the e-book market is expected to be more cosmopolitan and internationally-focused over time, we can expect the major players in the industry to be literally anywhere in the world, not just centered in a few existing commercial meccas like New York, Paris or Hong Kong. And as independently-published authors are expected to be a major force in e-books, their presence will further spread the industry out among more players. Where today’s publishing industry may be said to resemble a loose network of very large nodes with a few tenuous connections between them, tomorrow’s publishing world may resemble more of a finely-spread, (appropriately) web-like network of small nodes and tighter networks, with a number of slightly larger nodes interspersed throughout.
Governments will be impacted by this industry as well. As the market forms, nations will have to deal with the global economic models that develop, and make decisions about the copyrights of their literature, and how they will treat other countries’ copyrights. International laws will be written to cope with these issues, and some countries will see new bonds forming with other countries, perhaps the first of many, thanks to e-books.
The future sales model of e-books is still in flux. Various models are being tested today, but none have risen to the role of a standard, or perhaps they are locally workable but do not take the entire global economy into account. We can expect new sales models to be conceived and tested, and as other global and digital products impact the financial world with unique sales and accounting methods, some of these will further affect e-book sales models. We may have to wait until some entrepreneur develops a new payment transfer system, or for some global markets to adopt other financial bases and standards to be more in line with the rest of the world, as the missing link between digital books and a functioning global market. Or maybe all that is needed is the mass embracing of one existing sales model that can be rolled out to other regions, until everyone is dealing from the same deck. Or maybe all artists will form their own nation and write their own rules… only time will tell.
The development of e-books may also have a direct impact on the development of digital file security, a system that is more concept than effective reality today, but which can be expected to develop over time into a robust and workable system. E-books being as easy to replicate as they are, they could become the front-line products on which to test new security systems, with the results of those tests becoming the backbone of further development efforts. They will also be evaluated against other security systems, and quite possibly, the reality of those security systems could impact the development and delivery of the media itself.
~
The ultimate result of this development will be something for all of us to look forward to. New forms of literature will evolve, with new outlets of artistic expression befitting the digital age. Literature in general will be more widely available, transcending physical boundaries and limitations; this will include our classic literature, much of which is fading from the public consciousness as time passes and old volumes become increasingly hard to find. There will be more available personal and public space, as the vast libraries of the past are digitized and stored in devices that fit in our pockets. There will also be a significant savings of global resources, especially of forest products, but we can expect to see improvements on our methods of recovery and recycling of other materials, including precious and toxic metals, spurred out of necessity by increased electronics production. E-books can be a cornerstone of a truly global economy, and a greater dissemination of literature than the world has ever seen. And finally, all of this should result in a more literate, and intelligent, global culture.
I see this as an exciting time to be involved in literature, as it evolves to the next level of development. Opportunities for authors like me, and for established authors, are changing almost daily, and the walls of the old publishing “castles” are crumbling before us, brick by brick. Eventually all of the walls will be gone, and traditionalists and progressives alike will be left staring at the rubble, wondering what we will build next.
It is hard to say exactly how or when all of this will come to pass, especially given the legacy of twenty years of missteps, intentional delays, uncoordinated activity, conflicting agendas, stubborn resistance, corporate greed and laziness, personal selfishness, public defiance and institutional blindness. There was never anything insurmountable about the individual problems faced by the e-book industry: most of them could have been solved, right up front, with a modest application of rational thought in a moderately unsettled atmosphere. Unfortunately, there were simply too many competing and irrational factions, all concentrating too much atmospheric turbulence in one area, creating a social, political, corporate, nationalistic, financial and utopian typhoon right on top of the e-book movement. It’s no wonder that e-books have been so badly savaged over the years, while other digital movements, in noticeably calmer waters, have sailed right through and made it safely to port.
But bit by bit, this perfect storm is subsiding, and the e-book movement can already see the sun peeking through the clouds. With so many people actively and optimistically working to keep the movement sailing, it is inevitable that e-books will survive this storm and find their way to calmer waters, and a more certain future.
About the Author
Steve Jordan was an enthusiast of e-books before he was an author of them.
Steve is a self-taught graphic artist and web designer from the Washington, D.C. area, who started writing as a hobby when he began to have trouble finding things he wanted to read! His attraction to the computer era and the development of digital document systems led him to e-books as entertainment, which he loved to carry with him on whatever PDA he was using at the time; e-books later proved a natural direction for him as he searched for publishing opportunities. His efforts to develop his writing, and a consumer-friendly e-book sales model, have earned him fans and accolades in the e-book industry, and his works have been compared favorably with those of professional writers. His studies of social and technological history give him a unique and realistic perspective on the future that punctuates his writing.
He is urged on by his wife, spoken fondly of by family and friends, and tolerated by his cat.
Other books by Steve Jordan
Verdant Skies
The four satellites in Earth orbit were considered oases for humanity, the first of the habitats humans would move to and escape the growing environmental pressures and hazards of living on the ground.
But when a volcanic eruption threatens to finally ruin Earth's already-collapsing ecosystem, the remaining peoples of Earth demand access to the satellites, and threaten to overwhelm them, thereby dooming all of humanity.
And as the satellites are under siege from below, a desperate gamble for the survival of the satellite Verdant is taken...
Comments from MobileRead members:
"This is your best work so far, Steve ... The people and events depicted are more plausible, more likely to happen in our near-future. This one has that in spades."
"I have never been shy about saying that I'm a Steve Jordan fan, but this was just a quantum leap (forgive the inevitable pun) from your other stuff! You have hit a stride with this book that I hope we see in your work for a good long time!"
"You have a true talent for creating a future that can be envisioned, and Verdant Skies is yet another gem. The characters were interesting, and the science sneaks up on you without hitting you over the head."
"A fantastic read that ended far too soon. I found myself turning the last few pages in the desperate hope that it was not going to end yet."
The Lens (the Kestral Voyages)
THE SECOND KESTRAL VOYAGE: Planet Shura Dva seems to be deliberately resisting and sabotaging the terraforming work of the Oan Engineers. A local workers' leader claims to be able to “feel” the planet's anger... but the Engineers are positive he's really a terrorist leader secretly orchestrating the attacks.
And in the midst of local labor squabbles and strange planetary phenomena, Carolyn Kestral and her crew, flush after a lucrative cargo run, arrive on Shura Dva to help out a friend in need... and discover that the planet itself may not allow them to leave!
Kestral's back, by popular demand! And in this second adventure, Carolyn and the crew of the Mary find themselves caught in the crossfire between Oan terraformers, fanatical workers, and a planet that may actually be sentient— and angry!
Berserker (the Kestral Voyages)
Carolyn Kestral, discharged Galarchy Ranger, begins her new life as a freighter captain and collects a small crew. But there is a question of whether the Berserker virus that forced her Ranger discharge is still capable of being activated and turning her into a deadly human weapon. If it is merely dormant, will it be set off by a clandestine first run and a dangerous run-in with the Spiders in deep space? Will her crew stick around long enough to find out?
"Mr. Jordan has a good sense of action, and a great interest in the minutiae of running a space ship... It's a worthwhile read."
- eBook-Reviews.net
"All in all a thoroughly enjoyable read. If you preferred the original Star Trek series over the later incarnations then you'll love this. 7/10."
- digiReader.com
Chasing the Light
Tom Everson, forced to flee his home during the 2011 oil riots, returns eight years later to find the girl he had to leave behind, start a business and make a life for them both. But he had to sneak into the country illegally, the energy situation has only gotten worse, and the country may break out in more riots at any time! Did Tom arrive at exactly the wrong moment?
A romantic adventure, taking place against the backdrop of America's energy future.
As the Mirror Cracked
The Mirror isn't just another virtual world... it's a worldwide phenomenon, deeply intertwined with real world culture and finances. So when a plot to destroy the Mirror is uncovered, it's serious! It's a race to save the Mirror, and the real world with it, led by a mild-mannered writer and his Mirror "reflection"—the ultimate superhero, Zenith!
Virtual worlds can be fun, even profitable, but if your life depends on one, you'd better make sure it stays up!
Lambs Hide, Tigers Seek
In 2001, heiress Ellen Levinson vanished from a downtown Washington hotel under mysterious circumstances. Five years later, a series of blackmail letters lead investigator Alain Guest to Nashville, and into the local Goth and bondage scene, in search of the missing girl. Will he find Ellen alive... manage to avoid the blackmailers... or will his own fractured psyche finally shatter under the onslaught of such extreme and sexual lifestyles?
My first non-Sci-Fi novel, a noir-style psychological drama with a little mystery thrown in.
"Great, great, GREAT book. Excellent story, really enjoyable characters, and the way you throw in things that us geeks enjoy, like gps, pdas, ebooks etc is really cool. Oh, and i actually cracked up a few times."
- Daniel Mores (www.mores.cc), commentary on PocketPCThoughts.com
Encephalopath
Glen Jansen is seeking to improve his work and prospects when he purchases bleeding-edge personal computer technology. But when the tech gives him unexpected access to strange parts of the net, and seemingly to other people's very thoughts, he finds himself on the run from the government, the mob, and a bunch of ersatz terrorist/patriots, all while trying to find out who's really controlling the country's networks!
Somewhere between Johnny Mnemonic and The X-Files is an aspiring architect thrown unwittingly into a national conspiracy! And you thought your workday was a pain!
"...if you are looking for more content in addition to Baen's collections or the older books from Gutenberg or ManyBooks, this is an excellent new author to give a try."
- MobileRead.com
Worldfarm One
All Keith Maryland wants to do is leave the collapsed United States and start a new life in the U.N.'s ambitious Worldfarm project. But with American prejudice directed at him, demeaning office politics forced upon him, the distrust of his colleagues, and the unwanted attention of local drug smugglers—not to mention a mysterious past that he hopes to leave far behind—he soon wonders whether this was the best, or the worst, decision of his life.
Want to know what it's like being an immigrant in a country that doesn't like you? Neither does Keith Maryland. Sometimes, being a foreigner can suck.
Evoguía
A scientist in Atlanta creates a revolutionary breakthrough in accessing the untapped potential in humans, and in so doing, sows the seeds of a war between Homo Sapiens and Homo Evoguía... the Self-Evolved Man...
Who says mucking around with DNA is the only way to change the human body? This story covers three generations and three crises caused by the attempt to improve the species!
"...highly emotive...sure to make the reader consider their own position were they to be placed on either side of the evoguía divide. 9/10."
- digiReader.com
Sol
The Solars are the grunts of the Union... unappreciated and maligned. Even the creation of a new drive engine, capable of taking the Union on an unprecedented trip to the Inner Arm, earns them no respect. But when the Solars discover that an alien race is on its way to take over their ancient homeworld, no race in the Union can stand in their way.
A good old-fashioned space opera, with humans as the underdogs, lots of aliens, exploring other star systems, and an unexpected visit to the homeworld to save it from an impending invasion! Can't beat that with a blaster butt!
"...Once again Steve Jordan has produced a riveting read, part mystery, part sci-fi. And whilst you always know the good guys are going to win in the end there are still plenty of twists to keep you reading breathlessly till the final words. 9/10."
- digiReader.com
Midgard's Militia
Imagine a world of Superheroes: The godlike figures; the daring exploits; the incredible battles; the frightening mayhem; the thrilling victories.
Now imagine a world suddenly without its heroes.
Earth's heroes have just been killed on an outer space mission. And as the deadly force that destroyed them now rushes towards Earth, brave souls come forward to try to take the place of the heroes... to keep the world safe...
"...a fun book, along the lines of the heyday DC/Marvel comic books. The story is much more Clark Kent than Superman and all the more enjoyable for it. 7/10."
- iBme Network.
Factory Orbit
Ted Canter responds to a job offer and ends up at the next stage of the Industrial Revolution: Living and working in orbit. And in the midst of his experiences as a space pioneer, he finds himself at a pivotal moment in history...
A realistic blueprint for the next logical step in Mankind's industrial development—the development of working outposts in Earth orbit, the next frontier. Desire for profit may get us there, but ordinary men and women will make it work.
Robin
When Dr. Morris Cole tries to convince Robin Taft to give up the valuable medical equipment of her late mentor, she disappears literally overnight, equipment and all. Years later, Dr. Cole finds Robin Taft, but with a new name, a new face, and a secret too incredible to believe.
A secret he may not be able to keep.
"...a good premise and a main character with a cat-like personality."
- eBook Reviews.net.
"...an enjoyable and compelling read...8/10. A must for Sci-fi fans."
- iBme Network
Free downloads from the SJB:
The Onuissance Cells
Onus \n (ca.1640): Obligation; Responsibility.
Onuissance \ onn-uh-sonns\ n (ca.2280): Historic period known as The Age of Responsibility.
A series of short stories following the daily lives of the men and women of Midland City, Jewel of Namerica. Primarily centered around the complement of Peacekeepers assigned to the station, led by Commander Thomas Beak, each story is a window upon the new era known as the Age of Responsibility, or Onuissance, and the people who will define that era.
"...worthy of being compared to the Lord of the Rings... This book was brilliant and I would recommend it to anyone who likes sci-fi or human interest stories. I was gripped by both the characters and the environment they lived in, and was left wanting more. Thoroughly enjoyed it!! 8/10”
- digiReader.com
The First Expedition
Follow-up short story to The Onuissance Cells
"They should have turned back at the Moon." That was what they said about the ill-fated First Expedition to Mars. Now Matt Cartier, ex-Midland Peacekeeper, has followed his soul and made it to Mars... and finds one of his first duties is to lead his fellow astronauts to the site of the First Expedition.
Matt Cartier was the man with the heart of an explorer, who took his leave of the Midland Peacekeepers to join the Second Mars Expedition in the first chapter of the Onuissance Cells (Tour of Duty). His mission is a success... but before they can start exploring, Matt must visit the site of the unsuccessful first Mars mission, now 200 years dead.
Denial of Service
In the spirit of the successful USA Network series Burn Notice and Royal Pains comes a new series concept… well, actually, it’s an old concept… specifically, it’s the same concept as Burn Notice and Royal Pains, given a new setting, new characters, a new profession and a new catchy name that suggests the profession and situation of the main character.
M.D. Schitzeiss is suddenly accused of allowing a customer to suffer a denial of service (DoS) attack, and is subsequently blackballed from practicing IT in Baltimore. But this unfortunate circumstance sends him on a trip to San Diego and into the support of his brother, his brother’s sexy ex-wife, and people in need to whom he can apply his IT skills. As much fun as he has, though, he won’t stop in his quest to find out who ruined his career, and what he needs to do to get it back!
These five short stories mix adventure, humor, sex, and a good shot of geekiness, together into a rollicking mini-series! Soon to be a major television series, unless it only comes out as a free e-book on my site.
See all novels, and more, at www.SteveJordanBooks.com.