Sunday, January 12, 2020

The Momentum of Openness - My Journey From Netscape User to Mozillian Contributor

Working at Mozilla has been a very educational experience over the past eight years. I have had the chance to work side-by-side with many engineers at a large non-profit whose business and ethics are guided by a broad vision to protect the health of the web ecosystem. How did I go from sitting in front of a computer screen in 1995 to working behind the scenes of the web now? Below is my story of how my path wended from being a Netscape user to working at Mozilla, the heir to the Netscape legacy. It's amazing to think that a product I used 24 years ago ended up altering the course of my life so dramatically thereafter. But the world and the web were much different back then. And mine is just one of thousands of similar stories of people coming together for a cause they believed in.

The Winding Way West

Like many people my age, I followed the emergence of the World Wide Web in the 1990s with great fascination. My father was an engineer at International Business Machines when the Personal Computer movement was just getting started. His advice to me during college was to focus on the things you don't know or understand rather than the wagon-wheel ruts of the well-trodden path. He suggested I study many things, not just the things I felt most comfortable pursuing. He said, "You go to college so that you have interesting things to think about when you're waiting at the bus stop." He never made an effort to steer me in the direction of engineering. In 1989 he bought me a Macintosh personal computer and said, "Pay attention to this hypertext trend. Networked documents are becoming an important new innovation." This was long before the World Wide Web became popular in the societal zeitgeist. His advice was prophetic for me.

After graduation, I moved to Washington DC and worked for a financial news wire that covered international business, the US economy, the World Trade Organization, the G7, the US Trade Representative, the Federal Reserve, and breaking news in the US capital. This era stoked my interest in business, international trade and economics. During my research (at the time, via a Netscape browser and the AltaVista search engine) I found that I could locate much of what I needed on the web rather than in the paid LexisNexis database, which I also had access to at the National Press Club in Washington, DC.

When the Department of Justice initiated its antitrust investigation into Microsoft, for what were called anti-competitive practices against Netscape, my interest was piqued. Philosophically, I didn't particularly see what was wrong with Microsoft standing up a browser to compete with Netscape. Isn't it good for the economy for there to be many competing programs for people to use on their PCs? After all, from my perspective, it seemed that Netscape had held a monopoly on the browser space at the time.

Following this case was my first exposure to the ethical philosophy of the web developer community. During the testimony, I learned how Marc Andreessen and his team of software developer pioneers had an idea that access to the internet (like the underlying TCP/IP protocol) should not be centralized or controlled by one company, government or interest group. And the mission behind the Mosaic and Netscape browsers had been to ensure that the web could be device and operating system agnostic as well. This meant that you didn't need to have a Windows PC or Macintosh to access it.

It was fascinating to me that there were people acting like Jiminy Cricket, Pinocchio's conscience, overseeing the future openness of this nascent developer environment. Little did I know then that I myself was being drawn into this cause. The more I researched it, the more I was drawn in. What I took away from the DOJ/Microsoft consent decree was the concept that our government is willing to see our economy remain somewhat inefficient in the interest of spurring a diversity of competitive economic opportunity: a plurality of innovations competing in the open marketplace drives consumer choice and thereby facilitates lower consumer prices. In the view of the US government, monopolies limit this choice, keep consumer prices higher, and stifle entrepreneurial innovation. US fiscal and trade policy was geared toward creating greater access to open world markets while driving consumer prices lower, in an effort to increase the quality of life across all the economies the US traded with.

The next wave of influence in my journey came from the testimony of the chairman of the Federal Reserve before the US Congress. The Federal Reserve is the US central bank. Its leaders would regularly meet at the G7 conference in Washington DC with the central bank heads of the major economies to discuss their centrally managed interest rates and fiscal policies. At the time, the Fed Chairman was Alan Greenspan. Two major issues topped the agenda during his congressional testimony in the late 1990s. First were the trade imbalances between the US (a major international importer) and the countries of Asia and South America (which were major exporters), which were seeking to balance out their trade deficits via the WTO and regional trade pacts. In Mr. Greenspan's testimonies, Congressional representatives would repeatedly ask whether the internet would change this trade imbalance as more of the services sector moved online.
As someone who used a dial-up modem to connect to the internet at home (DSL and cable/dish internet were not yet common at the time), I had a hard time seeing how services could offset a multi-billion dollar asymmetry between the US and its trading partners. But at one of these sessions, Barney Frank (one of the legislators behind the "Dodd-Frank" financial reform bill, which passed after the financial crisis) asked Mr. Greenspan to talk about the impact of electronic commerce on the US economy. Mr. Greenspan, always wont to avoid stoking market speculation, dodged the question, saying that the Fed couldn't forecast what the removal of warehousing costs would do to market efficiency, and therefore to markets at large. That exchange stuck with me. At the time they were discussing Amazon, a bookseller that could avoid the typical overhead of a traditional retailer by eliminating brick-and-mortar storefronts and the burden of stocking inventory for products consumers hadn't yet decided they wanted. Amazon was able to source books at the moment the consumer decided to purchase, which eliminated much of the inefficiency of retail.

It was also at this time that my company decided to transition its service to a web-based news portal. In this phase, Mr. Greenspan cautioned against "irrational exuberance," as the stock market valuations of internet companies were soaring to dizzying heights relative to the future value of their projected sales. Amid this enthusiastic fervor, I decided that I wanted to move to Silicon Valley to enter the fray myself. I decided that my contribution would be in conducting international market launches and business development for internet companies.
After a stint doing web development with a small design agency, I found my opportunity to pitch a Japanese market launch for a leading search engine called LookSmart, which was replicating the Inktomi-style distributed search strategy. Distributed search was an enterprise, business-to-business (B2B) model: providing search infrastructure for companies like Yahoo, Excite, MSN, AOL and other portals that had their own dedicated audiences.

After my company was reasonably successful in Japan, Yahoo! Japan took interest in acquiring the company, and I moved back to the US to work with Yahoo! on distributing search services to other countries across Asia Pacific. In parallel, Netscape had followed a bumpy trajectory. AOL purchased the company and tried to fold it into its home internet subscriber service. America Online (AOL) was a massively popular dial-up modem service in the US at the time. AOL had a browser of its own too. But it was a "walled-garden" browser that served users their "daily clicks" like news, weather and email without promoting the open web. It's easy to understand their perspective. They wanted to protect their users from the untamed territory of the world wide web, which at the time they felt was too risky for the untrained user to venture out into. It was a time of a lot of Windows viruses, pop-ups, scams, and few user protections. AOL's stock had done really well based on their success in internet connectivity services. Once AOL's stock valuation surpassed Netscape's, they were able to execute an acquisition.

The team at Netscape may have been disappointed that their world-pioneering web browser was being acquired by a company that had a sheltered view of the internet and a walled-garden browser, even if AOL had been a pioneer in connecting the unconnected. It may have been a time of a lot of soul searching for Marc Andreessen's supporters, considering that the idea of Netscape had been one of decentralization, not corporate mergers.

A group of innovators inside AOL suggested that the threat of a world dominated by Microsoft's IE browser was a risky future for the world of open competitive ecosystem of web developers.  So they persuaded the AOL executive team to set up a skunk-works team inside AOL to atomize the Netscape Communicator product suite into component parts that could then be uploaded into a modular hierarchical bug triage tree, called Bugzilla, so that people outside of AOL could help fix code problems that were too big for internal AOL teams alone to solve.  There is a really good movie about this phase in AOL's history called "Code Rush."

The Mozilla project grew inside AOL for a long while alongside the AOL and Netscape browsers. But at some point the executive team believed that this needed to be streamlined. Mitchell Baker, an AOL attorney, Brendan Eich, the inventor of JavaScript, and an influential venture capitalist named Mitch Kapor came up with a suggestion that the Mozilla project should be spun out of AOL. Doing this would allow all of the enterprises interested in working on open source versions of the project to foster the effort, while the Netscape/AOL product team could continue to rely on any code innovations for their own software within the corporation.

A Mozilla in the wild would need resources if it were to survive. First, it would need to have all the patents that were in the Netscape patent portfolio to avoid hostile legal challenges from outside. Second, there would need to be a cash injection to keep the lights on as Mozilla tried to come up with the basis for its business operations. Third, it would need protection from takeover bids that might come from AOL competitors. To achieve this, they decided Mozilla should be a non-profit foundation with the patent grants and trademark grants from AOL. Engineers who wanted to continue to foster the AOL/Netscape vision of an open web browser, specifically for the developer ecosystem, could transfer to working for Mozilla.

Mozilla left Netscape's crowdsourced web index (called DMOZ or open directory) with AOL.  DMOZ went on to be the seed for the PageRank index of Google when Google decided to split out from powering the Yahoo! search engine and seek its own independent course.  It's interesting to note that AOL played a major role in helping Google become an independent success as well, which is well documented in the book The Search by John Battelle.

Once the Mozilla Foundation was established (along with a $2 million grant from AOL), it sought donations from other corporations that were to become dependent on the project. The team split out Netscape Communicator's email component as Thunderbird, a stand-alone open source email application, and the Phoenix browser was released to the public as "Firefox" because of a trademark dispute with another US company over use of the term "Phoenix" in association with software.

Google had by this time broken off from its dependence on Yahoo! as a source of web traffic for its nascent advertising business. It offered to pay the Mozilla Foundation for search traffic routed preferentially to Google over Yahoo! or the other search engines of the day. Taking "revenue share" from advertising was not something that the non-profit Mozilla Foundation was particularly well set up to do. So they needed to structure a corporation that could ingest these revenues and re-invest them into a conventional software business that could operate under the contractual structures of partnerships with other public companies. The Mozilla Corporation could function much like any typical California company with business partnerships, without requiring its partners to structure their payments as grants to a non-profit.

When Firefox emerged from the Mozilla team, it rapidly spread in popularity, in part because they did clever things to differentiate their browser from what people were used to in the Internet Explorer experience, such as letting users block pop-up banners or customize their browser with add-ons. But the surge in its usage came at a time when there was an active exploit capability in IE6 that allowed malicious actors to take over the user's browser for surveillance or hacking in certain contexts. The US government urged companies to stop using IE6 and to update to a more modern browser. It was at this time I remember our IT department at Yahoo! telling all its employees to switch to Firefox. And this happened across the industry.

Naturally, as Firefox's market share grew, Mozilla, being a non-profit, had to reinvest all proceeds from its growing revenues back into web development and new features, so it began to expand beyond its core focus of JavaScript and browser engines. As demand for alternative web browsers surged, several Mozillians departed to work on alternative browsers. The ecosystem grew suddenly with Apple and Google launching their own browsers. As these varied browsers grew, the companies collaborated on standards that all their software would use, to ensure that web developers didn't have to customize their websites to address the idiosyncrasies of each of the different browsers consumers could choose from.

When I joined Mozilla, there were three major issues that were seen as potential threats to the future of the open web ecosystem: 1) the "app-ification" of the web that was coming about in new phones and how they encapsulated parts of the web; 2) the proliferation of dynamic web content that was locked behind fragmented social publishing environments; and 3) the proliferation of identity management systems using social logins that were cumbersome for web developers to utilize. Mozilla, like a kind of vigilante superhero, tried to create innovative tactics and propose technologies to address each one of these. It reminded me of the verve of the early Netscape pioneers in trying to organize an industry around solving the problems the entire ecosystem was facing.
To discuss these different threads, it may be helpful to look at what had been transforming the web in years immediately prior.

What the Phone Did to the Web and What the Web Did Back

The web is generally based on HTML, CSS and JavaScript. A web developer would publish a web page once, and those three components would render the content of the webpage to any device with a web browser. What we were going into in 2008 was an expansion of content publication technologies, page rendering capabilities and even devices which were making new demands of the web. It was obvious to us at Yahoo! at the time that the industry was going through a major phase shift. We were building our web services as mashups of content from many different sources. The past idea of the web was based on static webpages that were consistent to all viewers. What we were moving toward was a dynamically assembled web. The concept of the static, consistent web of the 1990s was referred to as "web 1.0" in the web development community. The new style was frequently called "mash-up" or "re-mix," using multi-paned web pages that would assemble multiple discrete sources of content at the time of page load. We called this AJAX, for Asynchronous JavaScript and XML (extensible markup language), which allowed personalized web content to be rendered on demand. Web pages of this era appeared like dashboards and would be constantly refreshing elements of the page as the user navigated within panes of the site.
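To make that concrete, here is a minimal sketch of the kind of script an AJAX-era page would run; the endpoint URL and element id are purely illustrative, not from any real site I worked on:

    // Fetch a fragment of personalized content after the initial page load
    // and swap it into one pane of the dashboard, without reloading the page.
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "/api/headlines?user=123"); // hypothetical endpoint
    xhr.onload = function () {
      // Replace just the news pane with the asynchronously loaded markup.
      document.getElementById("news-pane").innerHTML = xhr.responseText;
    };
    xhr.send();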

In the midst of this shift to the spontaneously assembled dynamic web, Apple launched the iPhone. What ensued immediately thereafter was a kind of developer confusion, as Apple started marketing the concept that every developer wishing to be included on its phones needed to customize their content offerings as an app tailored to the environment of the phone. It was a kind of exclusion, where the web developer had to break their site into smaller chunks for ease of consumption in a smaller form factor and a different user context than the desktop environment.

The launch of the iPhone, which sought to combine this wave of the personalized dynamic web with elements of location-based content discovery, seemed an outright rethinking of the potential of the web at large. I was working on Yahoo!'s AT&T partnership at the time Apple selected AT&T as the exclusive carrier for the iPhone launch. Yahoo! had done its best to bring access to web content to low-end phones that industry professionals referred to as "feature phones." But these devices' view of the web was incredibly restricted, like the AOL browser of the early web. Collaborating with Yahoo! Japan, we brought a limited set of mobile-ready web content to the curated environment of NTT Docomo's "iMode" branded phones. We tried to expand this effort to the US. But it was not a scalable approach. The broader web needed to adapt to mobile. No curatorial effort to bootstrap a robust mobile web would achieve broad adoption.

The concept behind the iPhone was to present the breadth of the web itself on the phone of every person. In theory, every existing webpage should be able to render to the smaller screen without needing to be coded uniquely. Håkon Wium Lie had created the idea of CSS (Cascading Style Sheets), which allowed an HTML-coded webpage to adapt to whatever size screen the user had. Steve Jobs had espoused the idea that content rendered for the iPhone should be written in HTML5. However, at the time of the phone's release, many websites had not yet adapted their sites to the new standard of writing HTML agnostic of the user's device. Web developers were very focused on the then-dominant desktop personal computer environment. While many web pioneers had sought to push the web forward into the new directions that HTML5 could enable, most developers were not yet on board with those concepts. So the idea of the "native mobile app" was pushed forward by Apple to ensure the iPhone had a uniquely positive experience, distinct from what every other phone would show: a poorly rendered version of a desktop-focused website.

The adoption of the modern web architecture that existed in HTML5 hadn't reached broad developer appeal at the time that the market opportunity of iPhone and Android emerged. Mozilla saw it as a job that it could tackle: the de-appification of the app ecosystem. Watching this ambitious project was awe-inspiring for everyone who contributed to it at the time. Mozilla's Chief Technical Officer, Brendan Eich, and his team of engineers decided that we could make a 100% web phone without using the crutch of app-wrappers. The team took an atomized view of all the elements of a phone and sought to develop a web interface to allow each element of the device to speak web protocols, such that a developer could check battery life, motion status, gesture capture or other important signals relevant to the mobile user that hadn't been utilized in the desktop web environment. And they did it. The phone was app-less, with everything running in JavaScript on user demand. The phones launched in 28 countries around the world. I worked on the Brazilian market launch, where there was dizzying enthusiasm about the availability of a lower-cost smartphone based on an open source technology stack.
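The Battery Status API gives a taste of what "letting the hardware speak web protocols" looked like from a developer's perspective. This is a minimal sketch using the later-standardized navigator.getBattery() call; FirefoxOS exposed a broader set of device APIs, most of which never became cross-browser standards:

    // Ask the browser for the device's battery state and react to changes.
    if (navigator.getBattery) {
      navigator.getBattery().then(function (battery) {
        console.log("Battery level: " + Math.round(battery.level * 100) + "%");
        battery.addEventListener("levelchange", function () {
          console.log("Battery level changed: " + Math.round(battery.level * 100) + "%");
        });
      });
    }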

As we prepared for the go-live of the "FirefoxOS" phone launch in Brazil, the business team coordinated outreach through the largest telecommunications carriers to announce availability (and shelf space in carrier stores) for the new phones, while I and the Mozilla contributors in Brazil reached out to the largest websites in the country to "consent" to listing their sites as web applications on the devices. Typically, when you buy a computer, web services and content publishers aren't "on" the device; they are just accessible via the device's browser. But the iPhone and Android trend of "appification" of web content was so embedded in people's thinking that many site owners thought they needed to do something special to be able to provide content and services to our phone's users. Mozilla therefore borrowed the concept of a "marketplace," which was a web index of sites that had posted their availability to FirefoxOS phone users.

Steve Jobs was a bit haunted by the app ecosystem he created. It became a boon for his company, with Apple being able to charge a toll of $0.99 or more for content that was already available on the internet for free. But he urged the developer community to embrace HTML5, even while most developers were just plopping money down to wrap their web content in the iTunes-required app packaging. (The iPhone grew out of Apple's music player project, the iPod, which is why every app for the phone had to be installed from "iTunes," the music player application Apple included on every device it sold for distributing music and podcasts.) Companies such as PhoneGap and Titanium popped up to shim web content into the app packaging frameworks required by the Google-acquired Android platform and Apple's iTunes. But shims and app wrappers were an inelegant way to advance the web's gradual embrace of open standards. Something needed to change to de-appify the untidy hacks of the Jobs era. And this effort is going on to this day.

Mozilla's engineers suggested that there shouldn't be a separate concept of a "mobile web," and that we should do everything we could to persuade web developers and content publishers to embrace mobile devices as "1st class citizens of the web." So they hearkened back to the concepts in CSS, the much earlier development of web architecture mentioned previously, and championed the concept of device-aware responsive web design under the moniker of "Progressive Web Apps." The PWA concept is not a new architecture per se. It's the idea that a mobile-enhanced internet should be able to do certain things that a phone-wielding user expects it to do. So a webmaster should take advantage of certain things a user on the move might expect differently from a user sitting at a desktop computer. PWA work is being heavily championed by Google for the Android device ecosystem now, because it is larger than the iPhone ecosystem, and also because Google understands the importance of encouraging a seamless experience of web content agnostic of which device you happen to possess.
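Two of the building blocks most often cited for PWAs can be sketched in a few lines; the file names here are illustrative assumptions, not a prescription:

    // Register a service worker so the site can keep working offline and
    // satisfy the installability checks that PWA-capable browsers run.
    if ("serviceWorker" in navigator) {
      navigator.serviceWorker.register("/sw.js"); // hypothetical worker script
    }

    // React to screen size from script -- the same device-awareness a
    // CSS media query expresses declaratively.
    var smallScreen = window.matchMedia("(max-width: 600px)");
    smallScreen.addEventListener("change", function (e) {
      document.body.classList.toggle("compact-layout", e.matches);
    });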

After the launch of the phone, because Mozilla open sources its code, many other companies picked up and furthered the vision. Now the operating system has been forked into TVs, smart watches, micro-computers and continues to live on in phones under different brand names to this day. In addition, the project of the atomized phone, with hardware elements that can speak HTTPS for networking with other devices, has now expanded into the current Internet of Things project in Mozilla's Emerging Technologies group, which aims to bring the hardware products we buy (which all speak relatively incompatible radio frequencies) to the common lingua franca of the protocols of the internet. Not everyone has a Mozilla phone in their pocket. But that was never a goal of the project.
This brings me to one of the concepts that I appreciate most about Mozilla and the open source community. An idea can germinate in one mind, be implemented in code, then set free in the community of open source enthusiasts. Then anyone can pick it up and innovate upon it. While the open sourcing of Netscape wasn't the start of this movement, it has contributed significantly to the practice. The people who created the world wide web continue to operate under the philosophy of extensibility. The founders of Google's Chromium project were also keen Mozillians. The fact that a different company, with a different code base, created a similarly thriving open source ecosystem of developers aiming to serve the same user needs as Mozilla's is, in my view, precisely what Mozilla's founders set out to promote. And it echoes the same sentiments I'd heard expressed in Washington, DC back in the 1990s.

One of the things that I have studied a great deal, with fervor, fascination and surprise, was the concept of the US patent system. Back in the early days of the US government, Secretary of State Jefferson created the concept of a legal monopoly. It was established by law for the government first, then expanded to the broader right of all citizens, and later to all people globally via the US Patent and Trademark Office. I had an invention that I wished to patent and produce for the commercial market. My physics professor suggested that I not wait until I finished my degree to pursue the project. He introduced me to another famous inventor from my college and suggested I meet with him. Armed with great advice, I went to the USPTO to research prior art that might relate to my invention. Upon thorough research, I learned that anyone in the world can pursue a patent and be given a 17-year monopoly to protect the invention while the merits of its market fit could be tested. Thereafter, the granted patent would pass to the global community, free of royalties. "What!?" thought I. I declare the goods to the USPTO so they can give it away to all humanity shortly thereafter, once I did all the work to bring it to market? This certainly didn't seem like a very good deal for inventors in my view. But it also went back to my learnings about why the government prefers certain inefficiencies to propagate for the benefit of the greater common good of society. It may be that Whirlpool invented a great washing machine. But Whirlpool should only be able to monopolize that invention for 17 years before the world at large can reap the benefits of the innovation without owing royalties to the inventor.

My experiences with patents at Yahoo! were also very informative. Yahoo! had regularly pursued patents, including for one of the projects I launched in Japan. But their defense of patents had been largely in the vein of the "right to operate" concept: maintaining amicable cross-licensing with other organizations that held patents and operated in a similar space. (I can't speak for Yahoo!'s philosophical take on patents as I don't represent them. But these opinions stem from how I observed them enforcing their rights to formally granted USPTO patents and how they exercised those rights in the market.) I believed that the behaviors of Yahoo!, AOL and Google were particularly generous and lenient. As an inventor myself, I was impressed with how the innovators of Silicon Valley, for the most part, did not pursue legal action against each other. It seemed they actually promoted iteration upon their past patents. I took away from this that Silicon Valley is more innovation focused than business focused. When I launched my own company, I asked a local venture capitalist whether I should pursue patents for a couple of the products I was working on. The gentleman, a partner at the firm, said, paraphrasing: "I prefer action over patents. Execute your business vision and prove the market value. Execution is more valuable than ideas. I'd rather invest in a good executor than an inventor." And from the 20 years I've seen here, it always seems to be the fast follower rather than the inventor who gets ahead, probably precisely because they focus on jumping directly to execution rather than spending time drafting protections and illustrations with lawyers.

Mozilla has, in the time I've worked with them, focused on implementing first in the open, without thinking that an idea needed to be protected separately. Open source code exists to be replicated, shared and improved. When AOL and the Mozilla project team open sourced the code for Netscape, it essentially opened the patent chest of the former Netscape intellectual property for the benefit of all small developers who might wish to launch a browser without the cumbersome process of tracking licenses and code provenance. Bogging down developers with patent-"encumbered" code would slow them from introducing their own unique innovations. Watching a global market launch of a new mobile phone based on entirely open source code was a phenomenal era to witness. And it showed me that the benevolent community of Silicon Valley's innovators had a vision much akin to that of the people I'd witnessed in Washington DC. But this time I'd seen it architected by the intentional acts of thousands of generous and forward-thinking innovators, rather than through legislation or the legal prompting of politicians.

The Web Disappears Behind Social Silos

The web 2.0 era, with its dynamically assembled web pages, was a tremendous advance for the ability of web developers to streamline user experiences. A page mashed up from many different sources could enhance the user's ability to navigate across troves of information that would take a considerable amount of time to click and scroll through. But something is often lost when something else is gained. When Twitter introduced its micro-blog platform, end users of the web were able to publicize content they curated from across the web much faster than by having to author full blog posts and web pages about the content they sought to collate and share. Initially, the Twitter founders maintained an open platform where content could be mashed up and integrated into other web pages and applications. Thousands of great new visions and utilities were built upon the idea of the open publishing backbone it enabled. My own company and several of my employers also built tools leveraging this open architecture before the infamous shuttering of what the industry called "The Twitter Firehose." But it portended yet another phase shift in the very nascent era of the newly invented social web. The Twitter we knew became a diaspora of sorts as access to the firehose feed was locked down under identity-protecting logins. This may be a great boon to those seeking anonymity and small "walled gardens" of circles of friends. But it was not particularly good for what many of the innovators of the web 2.0 era had hoped would be a greater enfranchisement of web citizenry.
Many of the early pioneers of the web wanted to foster a web ecosystem where all content linked on the web could be accessible to all, without hurdles on the path that delayed users or obscured content from being shareable. Just as the app-ified web of the smartphone era cordoned off chunks of web content that could be gated by a paywall, the social web went into a further splitting of factions as login walls descended around environments where users had previously been able to easily publish and share content.

The parts of the developer industry that weren’t mourning the loss of the open-web components of this great social fragmentation were complaining about psychological problems that emerged once-removed from the underlying cause.  Fears of censorship and filter-bubbles spread through the news. The idea that web citizens now had to go out and carefully curate their friends and followers led to psychologists criticizing the effect of social isolation on one side and the risks of altering the way we create genuine off-line friendships on the other. 

Mozilla didn't take a particular stance on the philosophical underpinnings of the social web. In a way, the Bugzilla platform we used to build and maintain Firefox and Thunderbird was a purpose-built social network of collaboration, with up-voting and hierarchical structures. But it was all open source, like the code commits it housed. We did have discussions around codes of conduct for our Bugzilla community, geared to ensuring that it remained a collaborative environment where people from all walks of life and all countries could come together and participate without barriers or behaviors that would discourage or intimidate open participation.

But there were certain specific problems that social utilities introduced to the web architecture, in terms of the code they required webmasters to integrate for them to be used. So we focused on those. The first one we hit upon was the "log in with X" problem. In the United States, people love to watch race cars go around concrete tracks. They consider it a sport. One of the most famous racing brands is NASCAR, which is famous for having its cars and drivers covered with small advertisements from their commercial sponsors. As the social web proliferated, webmasters were putting bright icons on their websites with JavaScript prompts to sign in with five or more different social utilities. We called this problem "Nascar" because the webmaster never knew which social site a user had an identity registered with. So if a user visited once and logged in with Twitter, and another time accidentally logged in with Facebook, the persona they had established on the original site might be lost and irretrievable. Mozilla thought this was something a browser could help with. If a user stored a credential, agnostic of which source, at the browser level, the user wouldn't need to be constantly peppered with options to authenticate with 10 different social identities. This movement was well received initially, but many called for a more prominent architecture to show the user where their logged identities were being stored. So Mozilla morphed the concept from "BrowserID" into the idea of a Firefox Accounts tool that could be accessed across all the different copies of Firefox a user had (on PCs, phones, TVs or wherever they browsed the web). Mozilla then allowed users to synchronize their identities across all their devices with highly reliable cryptography to ensure data could never be intercepted between any two paired devices.
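For those curious what the browser-level credential idea looked like in code, here is a rough sketch of the since-retired BrowserID/Persona API as I remember webmasters integrating it; treat the exact calls and endpoints as an approximation rather than a reference:

    // The page tells the browser it wants to use browser-managed identity.
    navigator.id.watch({
      loggedInUser: null, // nobody signed in yet
      onlogin: function (assertion) {
        // Send the signed assertion to the site's own (hypothetical) backend
        // endpoint for verification.
        fetch("/auth/verify", { method: "POST", body: assertion });
      },
      onlogout: function () {
        // Clear the site's session when the user signs out in the browser.
        fetch("/auth/logout", { method: "POST" });
      }
    });

    // A single "sign in" button asks the browser, not a social network.
    document.getElementById("signin").onclick = function () {
      navigator.id.request();
    };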

Firefox Accounts has since expanded to let users synchronize session history, browser extensions and preferences, stored passwords (sparing users from retyping them, for those wary of keystroke logging), and file transmission with Firefox Send. Over the years the Firefox team has experimented with many common utilities that add to user convenience by leveraging their saved account data. And where Mozilla didn't offer something but an add-on developer did, the Firefox Account could be used to synchronize those add-on-based services as well.

The other great inconvenience of the social web was the number of steps necessary for users to share conventional web content on the social web. Users would have to copy and paste URLs between browser windows if they wished to comment on or share web content. Naturally, there was a Nascar solution for that as well: if the web developers of every website would just put in a piece of JavaScript so users could click a button to upvote or forward content, that would solve everything, right? Yeah, sure. And it would also bog down the pages with lots of extraneous code that had to be loaded from different web servers around the internet. Turning every webpage into a Frankenstein hodge-podge of Nascar-ed promotions of Twitter and Facebook buttons didn't seem like an elegant solution to Mozilla's engineers either!

Fortunately, this was obvious to a large part of the web development and browser community as well. So the innovative engineers of Mozilla, Google and others put their heads together on a solution that we could standardize across web browsers, so that every single website in the world didn't have to code in a solution unique to every single social service provider. The importance of this was accentuated when the website of the United States White House integrated a social engagement platform that was found to be tracking visitors with tiny code snippets that the White House itself hadn't engineered. People generally like the idea of privacy when they're visiting web pages. The idea that a visit to read what the president had to say came with a tradeoff that readers would be subsequently tracked because of that visit didn't appeal to the site's visitors any more than it appealed to the US government!

To enable a more privacy-protecting web, yet still provide the convenience users sought in engaging with social utilities, Mozilla's engineers borrowed a concept from the Progressive Web App initiative. PWAs, which emulated the engagement metaphors of phone apps, utilized the concept of a user "intent." Just as a thermostat in a house expresses its setting as a "call for heat" from the house's furnace, there needed to be a means for a user to express an "intent to share." And since phone manufacturers had enabled the concept of sharing at the operating system level for the applications users leveraged to express those intentions on the phone, a browser needed to have the same capability.

At Mozilla we called these concepts "Social APIs." An API (application programming interface) is a kind of handshake socket through which one program can interface with another. The term generally refers to any socketing capability between a piece of hardware, stand-alone software, or a web service and another entity that is not controlled by the originating interface. Microsoft's Outlook email software can interface effortlessly with a Google Gmail account using an API, if the user authenticates the program to make such requests to their Gmail account, without Microsoft or Google ever having to be directly involved in the authentication the user initiates. Just as Firefox Accounts could sync services on behalf of a user without knowing any of the details of the accounts the user requested to sync, so too should the browser be able to recognize when a user wants to share something, without having the user dance around between browser windows with copy and paste.
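The share-intent idea eventually surfaced across browsers as the Web Share API, which captures the same user intention our Social API work was aimed at; this is a minimal sketch, with the page details being illustrative:

    // Wire a page's own "share" button to the browser-level share intent.
    // The browser then offers whatever share targets the user has set up.
    document.getElementById("share-button").addEventListener("click", function () {
      if (navigator.share) {
        navigator.share({
          title: "An article worth reading",
          url: window.location.href
        });
      }
    });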

So Mozilla promoted the concept of browsers supporting share intents, as well as notification intents, so that our users didn't have to always be logged into their social media accounts in order to be notified when something required their attention on any given account. We did this with some great consideration. There was a highly marketed trend in Silicon Valley at the time around "gamification." This was a concept that web developers could use points and rewards to try to drive loyalty and return visits among web users. Notifications were heralded by some as a great way to drive a sense of delight for visitors of your website, along with the opportunity to lure them back for more of your web goodness, whatever you offered. Would developers over-notify, we wondered? There was a potential for oversaturation and distraction, which could cost users more in attention and time than it benefited them.
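The permission model that shipped with web notifications was designed with exactly that over-notification worry in mind: nothing can be shown until the user explicitly grants it. A minimal sketch of the standardized Notifications API (the message text is just an example):

    // Ask the user's permission first; only notify if they granted it.
    if ("Notification" in window) {
      Notification.requestPermission().then(function (permission) {
        if (permission === "granted") {
          new Notification("New reply", {
            body: "Someone responded to your comment."
          });
        }
      });
    }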

Fortunately, we did not see huge notification abuses from the sites that supported Social API. And we did see widespread interest from the likes of Facebook, Twitter, Yahoo! and Google, which were the major messaging service providers of the day. And so we jointly worked to uplevel this to the web standards body called the World Wide Web Consortium (abbreviated as the W3C) for promotion outside the Firefox ecosystem, so that it could be used across all web browsers that supported W3C standards.

Working with this team I learned a great deal from my peers in the engineering organization. At first I thought: if this is such a great idea, why doesn't Firefox make it a unique selling point of our software? What's the rush to standardize it? Jiminy Cricket voices across the organization pointed out that the goal of our implementation of open source code in the browser is precisely to have others adopt the greatest ideas and innovate upon them. The purpose of the standards organizations we work with is to pass on those innovations so that everyone else can utilize them without having to adopt Firefox-specific code. Good ideas, like the USPTO's concept of eventual dissemination to the broader global community, are meant to spread to the entire ecosystem, so that webmasters avoid the pitfall of coding their website to the functionality of a single piece of software or web browser. Mozilla engineers saw their mission as, in part, championing web compatibility, which they often shortened to "webcompat" in our discussions at developer events. Firefox has a massive population of addressable users. But we want web developers to code for all users to have a consistently great experience of the web, not just our audience of users.

There is a broad group of engineers across Microsoft, Google, Apple, Samsung, Mozilla and many small software developers who lay down the flags of their respective companies and band together in the standards bodies to dream of a future internet beyond the capability of the software and web we have today. They do this with a sense of commitment to the future we are creating for the next generation of internet, software and hardware developers who will follow in our footsteps, just as we inherited code, process and standards from our forebears. It is the yoke of our current responsibility to pass on the baton without being hampered by the partisanship of our competing companies. The web we want has to be built today if future generations are going to be set up for success in the demands of the technology environment we will create for tomorrow.

Twice a year the executive team at Mozilla convenes the people who support the Mozilla Foundation non-profit (and its daughter corporate entity that ships the software) in all-hands meetings where we discuss our part in this shared vision. Our Chairwoman Mitchell Baker, who has managed the Mozilla Project since the spin-out from the AOL organization many years ago, gets up on stage to discuss the opportunities she and the foundation see as the web ecosystem evolves. She speaks in rousing language, with phrases like "The Web We Want," in order to instill our team of contributors with an inspiring sense of passion and responsibility. We all go off around the globe as denizens of this mission, carriers of the flag and the inspiration, to try to champion and inspire others in turn.
After one of these events I went off to muse on our projects with one of my mentors, an engineer named Shane Caraveo. I'd been researching and thinking a lot about all the bluster and buzz that had been happening in the broader internet and press communities about social media platforms. Facebook had been commissioning studies on the psychological benefits and pitfalls of social media use. I'd listened to their commentaries defending the new paradigms of the social web. I asked Shane what he thought. Shouldn't we be championing people going off and building their own web pages, instead of making it ever easier to lean on social platforms and tools? Shane pointed out that Mozilla does do that, especially through the Mozilla Developer Network, which demonstrates with code examples exactly how to integrate various W3C code specs, for website owners, systems administrators and general web enthusiasts. Shane made a comment that sat with me for years after: "I don't care how people create content and share it on the internet. I care that they do."

The First Acquisition

The standardization of web notifications across browsers was one of the big wins of our project. The other, for Mozilla specifically, was the acquisition of the Pocket platform. When I worked at Yahoo!, one of the first web 2.0 acquisitions they had made was the bookmark backup and sharing service del.icio.us. (The name was awkward because many of the companies of the day had given up on paying for overpriced .com URLs in favor of the new surfeit of domains that had become available under the ".us" top-level domain name space.) Our Yahoo! team had seen the spread of the web-sharing trend, pre-Facebook, as one of the greatest new opportunities for web publishers to disseminate their content, through the praise and subsequent desire to "re-post" content among circles of friends. Many years later Yahoo! sold the cloud bookmarking business to the founder of YouTube, who sought to rekindle the idea. But another entrepreneur named Nate Wiener had taken a different approach to solving the same problem. He'd built add-ons for web browsers to address the need for cloud bookmarking.

Saving web content may seem like a particularly fringe use case for only the most avid web users.  But the Pocket service received considerable demand.  With funding from Google’s venture investing arm among others, Nate was able to grow Pocket to support archiving addons for Google’s Chrome browser, Android and iOS phones, and even expand into a destination website where users could browse the recommendations of saved content from other users in a small tight-knit group of curators.  (If this sounds to you like Netscape’s DMOZ project from 20 years ago and del.icio.us from 10 years ago, that was my thought too.)  But it was perhaps the decentralization of Pocket’s approach that made it work so well.  The community of contributors supporting it was web-wide!  And the refined stream of content coming out of its recommendations was very high quality journalism that was in no way influenced by the news publishing industry, which had its own approaches to content promotion.

When I first met the Pocket team, they commented that their platform was not inherently social. So the constraints of the Social API architecture didn't fit the needs of their users. They suggested that we create a separate concept around "save" intents, which didn't fit within the constraints of the social media intents that the phones and services were pursuing at the time. When Firefox introduced the "save" function in our own browser, it seemed to duplicate the existing "Save to Bookmarks" plus "Firefox Accounts Sync" architecture. But we found that a tremendous number of users were keen on Pocket save rather than the sync-bookmarks architecture we already had.
Because Google had already invested in Pocket, I had thought that it was more likely that they would join the Chrome team eventually. But by a stroke of good fortune, the Pocket team had had a very good experience working alongside the Mozilla team and decided that they preferred to join Mozilla to pursue the growth of their web services. This was the first acquisition Mozilla had executed. Because I had seen how acquisition integrations sometimes fared in Silicon Valley, I was fascinated to see how Mozilla would operate another company with its own unique culture. Fortunately, in my view, Pocket continues to support all the browsers that compete with Firefox. And the active community of Pocket users and contributors continues to stay robust and active to this day.

Protection of Anonymity

One of the most fascinating industry-wide efforts I saw at Mozilla was the campaign to protect user anonymity and enable pseudonymity. As social networking services proliferated in the Web 2.0 era, there were several mainstream services that sought to force users into a web experience where they could have only one single, externally verified web identity. The policy was lambasted in the web community as a form of censorship, where internet authors were blocked from using pen names and aliases (the way Mark Twain authored books under his nom de plume rather than his birth name).

On the flip side of the argument, proponents of the real-name policy theorized that anonymity of web identities led to trolling behaviors in social media, where people would be publicly criticized by anonymous voices who could avoid reputational repercussions.  This would, in theory, let those anonymous voices say things about others that were not constrained by the normal ethical decency pressures of daily society. 

Wired magazine wrote editorial columns against real-names policies, saying that users turn to the web to be whoever they want to be and to express anonymously ideas that they couldn't without multiple pen names. A person's web identity (sometimes referred to as a "handle," from the early CB radio practice of using declared identities in radio transmissions) would allow them to be more creative than they otherwise would be. One opinion piece suggested that the web is where people go to be a Humpty Dumpty assortment of diverse identities, not to be corralled together as a single source of identity. I myself had used multiple handles for my web pages. I wanted my music hobby websites, photography website and business websites to all be distinct. In part, I didn't want business inquiries to be routed to my music website. And I didn't want my avocation to get tangled with my business either.

European governments jumped in to legislate the preservation of anonymity with laws referred to as the "right to be forgotten," which would force internet publishers to take down content if a user requested it. In a world where content was already fragmented in a manner detached from the initial author, how could any web publisher comply with individual requests for censorship? It wasn't part of the web protocol to disambiguate names across the broader internet. So reputation policing in a decentralized content publishing ecosystem proved tremendously complicated for web content hosts.
Mozilla championed investigations, such as the Coral Project, to address the specific problem of internet trolling when it was targeted at public commenting platforms on news sites. But for a relatively small player in the broader market, addressing a behavioral problem with open source code alone was challenging. And a broader issue was looming as a threat to Mozilla's guiding principles: the emergence of behaviorally targeted advertising that spanned websites loomed as a significant threat to internet users' right to privacy.

The founders of Mozilla had decided to pen a manifesto of principles that they established as the guiding framework for how they would govern projects they intended to sponsor in the early days of the non-profit. (The full manifesto can be read here: https://www.mozilla.org/en-US/about/manifesto/) In general, the developers of web software have the specific interests of their end users as their guiding light. They woo customers to their services and compete by introducing new utilities and features that contribute to the convenience and delight of their users. But sometimes the companies that make the core services we rely on have to outsource some of the work they do to bring those services to us. With advertising, this became a slippery slope of outsourcing. The advertising ecosystem's evolution in the face of the Web 2.0 emergence, and the trade-offs publishers were making with regard to end-user privacy, became too extreme for Mozilla's comfort. Many outside Mozilla also believed the compromises being made in privacy were unacceptable, and so banded together in support of us.

While this is a sensitive subject that raises ire for many people, I can sympathize with the motivations of the various complicit parties that contributed to the problem.  As a web publisher myself, I had to think a lot about how I wanted to bring my interesting content to my audience. Web hosting cost increases with the volume of audience you wish to entertain.  The more people who read and streamed my articles, pictures, music and video content, the more I would have to pay each month to keep them happy and to keep the web servers running. All free web hosting services came with compromises.  So, eventually I decided to pay my own server fees and incorporate advertising to offset those fees.

Deciding to post advertising on your website is a concession to give up control. If you utilize an ad network with dynamic ad targeting, the advertising platform makes the decision about what goods or services show up on your web pages. When I was writing about drum traditions from around the world, advertisers' systems might decide my website was about oil drums and show ads for steel barrels on my pages. As a web publisher, I winced. Oil barrels aren't actually relevant to people who read about African drums. But it paid the bills, so I tolerated it. And I thought my site visitors would forgive the inconvenience of seeing oil barrels next to my drums.

I was working at Yahoo! when the professed boon of behavioral advertising swept through the industry.  Instead of serving semantically derived keyword-matched ads for my drum web page, suddenly I could allow the last webpage you visited to buy “re-targeting” ads on my webpage to continue a more personally relevant experience for you, replacing those oil barrel ads with offers from sites that had been relevant to you in your personal journey yesterday, regardless of what my website was about.  This did result in the unsightly side effect that products you purchased on an ecommerce site would follow you around for months. But, it paid the bills. And it paid better than the mis-targeted ads. So more webmasters started doing it.

Behaviorally targeted ads seemed like a slight improvement in a generally under-appreciated industry at the start.  But because it worked so well, significant investment demand spurred ever more refined targeting platforms in the advertising technology industry.  And internet users became increasingly uncomfortable with what they perceived as pervasive intrusions of their privacy. Early on, I remember thinking, “They’re not targeting me, they’re targeting people like me.”  Because the ad targeting was approximate, not personal, I wasn’t overly concerned.

One day at Yahoo! I received a call. It had been escalated through their customer support channels as a potential product issue. As I was the responsible director in the product channel, they asked me if I would talk to the customer. Usually, business directors don't do customer support directly. But as nobody was available to field the call, I did. The customer was receiving inappropriate advertising in their browser. It had nothing to do with a Yahoo!-hosted page, which has filters for such advertising. It was caused by a tracking cookie that the user, or someone who had used the user's computer, had acquired in a previous browsing session. I instructed the user how to clear the cookie store in their browser, which was not a Yahoo! browser either, and the problem was resolved. This experience made me take to heart how deeply people fear the perceived invasions of privacy from internet platforms. The source of the problem had not been related to my company. But this person had nobody else to turn to to explain how web pages work. And considering how rapidly the internet emerged, it dawned on me that many people who experienced the internet's emergence in their lifetime likely never had a mentor or teacher to explain how these technologies worked.

Journalists started to uncover some very unsettling stories about how ad targeting can become directly personal. Coupon offers on printed store receipts were revealing customers' purchase behaviors, which could highlight details of their personal lives and even their health. Because principle #4 of Mozilla's manifesto argues that "Individuals' security and privacy on the internet are fundamental and must not be treated as optional," Mozilla decided to tackle the ills of personal data tracking on the web through open source code transmitted in browser headers, the handshake that happens between a computer and a web server at the start of a browsing session.
The most savvy web users do know what browser cookies are, where to find them, and how to clear them if needed. But one of our security engineers pointed out to me that we don't want our customers to always be chasing down errant, irritating cookies and compulsively flushing their browser history. This was friction, noise and inconvenience that the web was creating for the web's primary beneficiaries. The web browser, as the user's delegated agent, should be able to handle these irritations without wasting its users' time or forcing them to hunt down pesky plumbing issues in the software's preference settings. The major browser makers banded together with Mozilla to try to eradicate this ill.

At first it started with a very simple tactic. The browser cookie had been invented as a convenience: if you visited a page, the site could set a cookie so that when you came back it would remember you and you wouldn't have to start over. That was the need the cookie solved. Every web page you visit sets a cookie if it needs to offer you some form of customization. Subsequently, advertisers viewed a visit to their webpage as a kind of consent to be cookied, even if the visit happened inside a browser frame, called an inline frame (iframe). You visited Amazon previously, surely you'd want to come back, they assumed. There should have been a kind of explicit statement of trust, which had been described as an "opt-in," even though a visit to a web destination was in no way a contract between a user and a host. Session history seemed like a good implied vector to define trust. Except that not all elements of a web page are served from a single source. Single origin was a very Web 1.0 concept. In the modern web environment, dynamically aggregated web pages pulled content, code and cookies from dozens of sources in a single page load.

The environment of trust was deemed to be the 1st-party relationship between the site a user visits in their web browser and the browser's cookie store, which served as a temporary "cache of history" to be used over short time frames. Cookies and other history-tracking elements that could be served in the iframe windows of a webpage (the portions of the page that web designers "outsource" to external content calls) were considered outside the environment of user-delegated trust in the 1st party. They were called "3rd party cookies" and were considered ephemeral.
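A small sketch may help make the 1st-party/3rd-party distinction concrete; the domains and values here are made up for illustration:

    // First-party cookie: set by the page the user deliberately visited.
    document.cookie = "session_id=abc123; path=/; max-age=3600";

    // Third-party cookie: an ad or widget loaded in an <iframe> from another
    // origin (say, ads.example.com) sets its own cookie via a response header:
    //
    //   Set-Cookie: tracker_id=xyz789; Domain=ads.example.com; SameSite=None; Secure
    //
    // Because that origin differs from the page the user visited, browsers
    // treat it as a "third-party" cookie and can restrict or block it.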

Browser makers tended to standardize their handling of web content across their separate platforms through the W3C or other working groups. And in order to create a standard, there had to be a reference architecture that multiple companies could implement and test. The first attempt at this was called "Do Not Track" (DNT), a preference that a user could set in their browser to have trackers quarantined or blocked for certain sessions. The browser would send a header with each visit to a web server stating that cookies were to be used for that session only, not to endure across other site visits on other web pages thereafter. This seemed innocuous enough. It allowed the page architecture to remember the session just as long as necessary to complete the session. And most viewed the DNT setting in a browser as a simple enough statement of the trust environment between a web publisher and a visitor for the purposes of daily browser usage.
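From a web developer's point of view, honoring the preference was meant to be as simple as checking a flag before loading any tracking code; a minimal sketch (the analytics call is a stand-in, not a real library):

    // The browser sends the preference as an HTTP request header ("DNT: 1")
    // and also exposes it to page scripts.
    if (navigator.doNotTrack === "1") {
      // The user asked not to be tracked; a well-behaved site skips its
      // ad-targeting and analytics scripts here.
      console.log("DNT enabled: skipping tracking scripts");
    } else {
      loadAnalytics(); // hypothetical function that injects a tracking script
    }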

All the major browser vendors addressed the concern of government supervision with the idea that the industry should self-regulate. Meaning, browsers, publishers and advertisers should come to some general consensus that could be used across the industry on how to best serve people using their products and services, without having government legislators mandate how code should be written or operate. Oddly, it didn't work so well. Eventually, certain advertisers decided not to honor the DNT header request. The US Congress invited Mozilla to discuss what was happening and why some browsers and advertising companies had decided to ignore the user preferences stated by our shared code in browser headers.

Our efforts to work in open source via DNT with the other industry parties did not ultimately protect users from belligerent tracking. They resulted in a whack-a-mole issue we referred to as "fingerprinting," where advertising companies were re-targeting based on computer or phone hardware characteristics, or even on the preference not to be tracked itself! It was a bit preposterous to watch this happen across the industry and to hear the explanations of those doing it. What was very inspiring to watch on the other side was the effort of the Mozilla product, policy and legal teams to push this concern to the fore without asking for legislative intervention. Ultimately, European and US regulators did decide to step in to create legal frameworks to punitively address breaches of user privacy that were enabled by the technology of an intermediary. Even after the launch of the European GDPR regulatory framework, the ensuing scandals around lax handling of users' private data by internet services are now widely publicized and at the forefront of technology discussions and education.

(More to come as the story unfolds)
