Wednesday, January 1, 2020

The Momentum of Openness - My Journey From Netscape User to Mozillian Contributor


(Update: Because this post is exceedingly long, I have decided to make it available as a printed book: Momentum of Openness. It will remain free to read here.) Insider story behind the cover image: Mozilla's mascot derives from the name of the Mosaic browser and the trademarked name of a large mythical beast from Japanese culture which would rise from the oceans to protect mankind against peril. You may see this mythical creature in Bugzilla, or featured in popular web browsers like Chrome when they are having issues addressing your requests. I like to call it "The Mozilla" because it serves as a protector of all that's good. When I first came to the headquarters of Mozilla, I had to get a picture of myself being bitten by the Mozilla. You'll understand why we feel so affectionately about this symbolic icon as you read the story of my journey to web development below.

Foreword

Shepard Fairey's Dino
Working at Mozilla has been a very educational experience over the past eight years. I have had the chance to work side-by-side with many engineers at a large non-profit whose business and ethics are guided by a broad vision to protect the health of the web ecosystem. How did I go from sitting in front of a computer screen in 1995 to working behind the scenes of the web now? Below is my story of how my path wended from being a Netscape user to working at Mozilla, the heir to the Netscape legacy. It's amazing to think that a product I used 25 years ago ended up altering the course of my life so dramatically. But the world and the web were much different back then. And mine was just one of thousands of similar stories of people coming together for a cause they believed in.

The Winding Way West

Like many people my age, I followed the emergence of the World Wide Web in the 1990s with great fascination. My father was an engineer at International Business Machines when the Personal Computer movement was just getting started. His advice to me during college was to focus on the things you don't know or understand rather than the wagon-wheel ruts of the well-trodden path. He suggested I study many things, not just the things I felt most comfortable pursuing. He said, "You go to college so that you have interesting things to think about when you're waiting at the bus stop." He never made an effort to steer me in the direction of engineering. In 1989 he bought me a Macintosh personal computer and said, "Pay attention to this hypertext trend. Networked documents are becoming an important new innovation." This was long before the World Wide Web entered the societal zeitgeist. His advice was prophetic for me.

After graduation, I moved to Washington, DC to work for a financial news wire that covered international business, the US economy, the World Trade Organization, the G7, the US Trade Representative, the Federal Reserve and breaking news that happened in the US capital. This era stoked my interest in business, international trade and economics. During my research (at the time, via a Netscape browser, using the AltaVista search engine) I found that I could locate much of what I needed on the web rather than in the paid LexisNexis database, which I also had access to at the National Press Building.

When the Department of Justice initiated its anti-trust investigation into Microsoft, for what were alleged to be anti-competitive practices against Netscape, my interest was piqued. Philosophically, I didn't particularly see what was wrong with Microsoft standing up a browser to compete with Netscape. Isn't it good for the economy for there to be many competing programs for people to use on their PCs? After all, from my perspective, it seemed that Netscape had held the monopoly in the browser space at the time.

Following this case was my first exposure to the ethical philosophy of the web developer community. During the testimony, I learned how Marc Andreessen, and his team of software developer pioneers, had an idea that access to the internet (like the underlying TCP/IP protocol) should not be centralized, or controlled by one company, government or interest group. And the mission behind the Mosaic and Netscape browsers had been to ensure that the web could be device and operating system agnostic as well. This meant that you didn't need to have a Windows PC or Macintosh to access it.

It was fascinating to me that there were people acting like Jiminy Cricket, Pinocchio's conscience, overseeing the future openness of this nascent developer environment. Little did I know then that I myself was being drawn into this cause. What I took away from the DOJ/Microsoft consent decree was the concept that our government wants to see our economy remain inefficient in the interest of spurring a diversity of competitive economic opportunity. Many companies doing the same thing, which seemed like a waste to me, would spur a plurality of innovations that improve with each iteration. When these iterations compete in the open marketplace, they drive consumer choice and pricing competition, which by a natural process lowers prices for the average American consumer. In the view of the US government, monopolies limit this choice, keep consumer prices higher, and stifle entrepreneurial innovation. US fiscal and trade policy was therefore geared toward the concept of creating greater open access to world markets in an effort to increase global quality of life through "spending power" for individuals in the participating economies it traded with. Inflation control and cross-border currency stability are other interesting components of this, which I'll save for a future blog post.

The next wave of influence in my journey to web development came from the testimony of the chairman of the Federal Reserve Bank. The Federal Reserve is the US central bank; in the press it is typically just called "The Fed." It is a non-partisan agency in charge of managing the money supply and the inter-bank lending rates which influence the flow of currency in the US economy. Its governors would regularly meet at the G7 conferences in Washington, DC with their counterparts from major influential countries to discuss interest rates and fiscal policies. At the time, the Fed Chairman was Alan Greenspan. Two major issues topped the agenda during his congressional appearances in the late 1990s. The first was the trade imbalances between the US (a major international importer) and the countries of Asia and South America (which were major exporters) who were seeking to balance out their trade deficits via the WTO and regional trade pacts. In Mr. Greenspan's testimonies, Congressional representatives would repeatedly ask whether the internet would change this trade imbalance as more of the services sector moved online.

As someone who used a dial-up modem to connect to the internet at home (DSL and cable/dish internet were not yet common at the time) I had a hard time seeing how web services could offset a multi-billion dollar asymmetry between the US and its trading partners. The second issue surfaced in one of Mr. Greenspan's sessions, when Barney Frank (one of the legislators behind the "Dodd-Frank" financial reform bill which passed post-financial crisis) asked him to talk about the impact of electronic commerce on the US economy. Mr. Greenspan, always wont to avoid stoking market speculation, deflected the question, saying that the Fed couldn't forecast what the removal of warehousing costs would do to market efficiency, and therefore to markets at large. This testimony stuck with me. At the time they were discussing Amazon, a bookseller which could avoid the typical overhead of a traditional retailer by eliminating brick-and-mortar storefronts with their inventory stocking burdens. Bookstores allocated shelving space in retail locations for products consumers might never buy. A small volume of high-selling inventory therefore had to cover the real estate cost of the bulk of items which would seldom be purchased. Amazon was able to source the books at the moment the consumer decided to purchase, which eliminated the warehousing and shelf-space cost, therefore yielding cost savings in the supply chain.

It was at this time that my company, Jiji Press, decided to transition its service to a web-based news portal as well. I worked with our New York bureau team during the process of our network conversion from the traditional telephony terminals we used to new DSL-based networks. Because I'm a naturally-inclined geek, I asked lots of questions about how this worked and why it worked better than our terminal-style business (which was similar to a Japanese version of the Reuters, Associated Press and Bloomberg terminals).

This era came to be called the "dotcom boom," during which every company in the US started to establish a web presence through the launching of web services with a ".com" top-level domain name. A highly speculative stock market surge built up around businesses that seemed poised to capitalize on this rush to convert to web-based services. Most companies seeking to conduct initial public offerings in this time listed on the Nasdaq exchange, which seemed to have a highly volatile upward trajectory. In this phase, Mr. Greenspan cautioned against "irrational exuberance" as stock market valuations of internet companies were soaring to dizzying proportions relative to the value of their projected sales. I decided that I wanted to move to Silicon Valley to enter the fray myself. It wasn't just the hype of the digital transformation or the stock valuations that enticed me. I had studied the trade pacts of the United States Trade Representative vis-à-vis the negotiators of the Japanese government. I had a strong respect for the ways that the two governments negotiated to reduce tariffs between the two countries for the benefit of their citizens. I knew that it was unlikely that I could pursue a career path to become a government negotiator, so I decided that my contribution would be in conducting international market launches and business development for internet companies between the two countries.

After a stint doing web development at a small design agency, I found my opportunity to pitch a Japanese market launch for a leading search engine called LookSmart, which was building distributed search engines to power other portals. Distributed search was an enterprise business model, called business-to-business (B2B), whereby we provided infrastructure support for other portals that had their own direct audiences.

First launch of LookSmart Japanese Search on Isize
I recruited a team to build the initial Japanese search engine index. This was a curated search directory, a fixed database of authoritative websites on given topics, which we could query with boolean search parameters and integrate into client websites. After my team completed the first build of the database, we demonstrated it to Microsoft, which in turn licensed our search engine to power their MSN-branded portals. On this news we listed our company on the Nasdaq stock market and planned a global expansion. We thereafter formed a joint venture with British Telecommunications, called BT LookSmart, to expand the LookSmart search distribution world-wide. I relocated to Sydney as part of the new JV team to build the hosted-search front-end pages for our network partners across the Asia Pacific region. (With the support of a developer team spanning Australia, Israel and Norway.) Upon our first site launch, I moved to Tokyo to incorporate LookSmart Japan, hire a local team, and start building a local presence for the company. I turned my focus from product development to prospecting and contracting with local business partners, mainly portals. Recruit's Isize portal was the first to partner, followed swiftly by other major ISP portal companies (Japan Telecom's ODN, NTT's OCN, KDDI's DION, NEC's Biglobe and Excite Japan). We complemented our offering with an advertising and paid-search indexing service that allowed marketers to promote their sites within our index. Through our revenue-sharing partnerships we would return revenues from this to the portals and ISPs which had partnered with us. Over a years-long process of building customized search portals, and conducting gradual business development expansion of our search content and capabilities, we came to represent a broad base of the Japanese internet market. As our back-end servers then processed a significant share of total Japanese search query volume, advertisers sought to have their sites indexed in sponsored listings at an ever greater pace.
 
After we were representing a majority of local ISP search volume, Yahoo Japan took interest in acquiring the company. I moved back to the US to work with Yahoo headquarters to distribute search services to other countries across Asia Pacific. By this time Google, which had been an algorithmic search provider to yahoo.com, decided to break off from Yahoo to launch its own portal service. In this transition, Yahoo licensed its patents to Google so that they could launch a competing paid search offering. Google in turn launched in Japan and began bidding against my former partnerships from the LookSmart Japan network. I thought it was exceedingly generous for Yahoo to empower their own competitors in the interest of a diverse open market ecosystem! While I felt some sense of regret that partnerships I'd worked years to establish were now in contention with new market entrants, I came to understand the wisdom of their decision. And in an ironic twist, Yahoo asked me to go on thereafter to pursue competitive bids to counter Google's future offers to our partners.

Yahoo sent me back to Japan to develop and pitch new enhanced mobile search services that I hadn't been able to offer from the LookSmart toolkit. (British Telecommunications did have a mobile app/portal in Japan which I supported as part of BT LookSmart, but our index was made up of traditional PC-focused web content that rendered poorly on small screens.) With Yahoo's Tokyo-based team, we developed a mobile-site search index that excluded PC-focused content. We built a special mobile-centric advertising platform based on Yahoo's patented auction-placement advertising platform. We then had a search indexing capability to uniquely query content that was specially designed for the three major cell network companies across Japan at the time (Japan Telecom, another Softbank company; KDDI; and NTT Docomo). We were announced in Nikkei Business Publications' annual almanac for that year as one of the internet market's leading innovations.

When I'd go to prospecting meetings on behalf of Yahoo, I'd hear comments in Japanese in the room, "Hey, isn't that the LookSmart guy?" to which I'd humbly ask for their ongoing support and consideration. When people asked me to contrast our technologies against Google's, I was able to speak from a great deal of experience with them. After all, they'd been a partner of Yahoo's and an industry collaborator, even if they were owned by different shareholders. We all had built our distinct offerings on similar techniques of web crawling and algorithmic filtering. And our products leveraged and were protected by the same patents.

Over the course of the ensuing decade, my team continued to expand our licensing and search infrastructure partnerships more broadly. I conducted the search-infrastructure partnerships in Korea, Taiwan, Hong Kong, India, Brazil and Australia with local Yahoo teams. As part of its global expansion in search technology, Yahoo decided to acquire AltaVista, Inktomi, Overture and Fast Search. Eventually, Microsoft decided to give up outsourcing search services and launched their own search engine, branded as Bing, bringing even more competition into the space formerly dominated by Yahoo and Google. Thereafter dozens of subject-specific search providers sprang up to fill niche opportunities in shopping, food and site-specific indexing.

Meanwhile at Netscape

In parallel to my journey, Netscape had followed a somewhat bumpy trajectory. As I mentioned in the first chapter, Microsoft had decided to launch a competitor to the Netscape browser that Andreessen's team had developed out of Mosaic. While Netscape had fared very well after their successful Nasdaq public offering, the increasing competition from the launch of Microsoft's Internet Explorer pushed the active user base of Netscape Communicator products out of the majority share. The share price of Netscape had fallen to the point that the executive team started to look for strategic options for a new home for the product. Ultimately they received an acquisition bid from America Online, referred to commonly as AOL. AOL had started as a massively popular dial-up service in the United States before the rise of DSL and cable broadband internet access.

AOL had a browser of their own too. But it was considered by many to be a walled-garden browser tied to the portal service that AOL also owned, which provided their customers with what industry folk referred to as their "daily clicks," like news, weather and email. Portals such as Yahoo and AOL sought to promote certain preferred content for new web users to ensure a positive experience of trusted or licensed content. At the same time, they wanted to protect their users from the untamed territory of the world wide web of the 1990s, which they felt was risky for the untrained user to venture into. (This was a time of a lot of Windows viruses, pop-ups, scams, and few user protections.) AOL's stock had done really well based on their success in internet connectivity services, content aggregation and their advertising platform. So they asserted that they could put the necessary resources into growing Netscape back to its former prominence.

The team at Netscape may have been disappointed that their world-pioneering web browser was being acquired by a company that had a limited view of the internet, even if AOL had been pioneers in connecting the unconnected. It was probably a time of soul searching for Marc Andreessen's supporters, considering that the idea of Netscape had been one of decentralization, not corporate mergers. A group of innovators inside AOL argued that a world dominated by Microsoft's IE browser was a risky future for the open, competitive ecosystem of web developers.

A small group of engineers inside the company persuaded the AOL executives to set up a skunk-works team to open source the Netscape Communicator product. They achieved this by dividing the product into component parts that could then be uploaded into a modular, hierarchical bug triage tree, which they referred to as Bugzilla. By doing this, they theorized that people outside of AOL could help fix code problems that were cumbersome for internal AOL teams to solve. By allowing contributing developers to "fork" (meaning create derivative products from) the open source code base, they would further incentivize innovation, as those developers could compete based on features they introduced to their derivative products. Because the concept was created with a sense of generosity, they believed that most of the innovations would be shared back to AOL. Succeeding with a small fork is hard. But introducing software patches to the complete global audience of AOL/Netscape would benefit all other developers and users in turn. Somewhat to my surprise, they didn't make a requirement that forks thereafter be maintained as open source. So some of the employees of Mozilla would thereafter leave the company and launch their own browsers to compete directly. This reminded me of Yahoo's previous act of licensing out its innovations to competitors in the interest of a healthy competitive developer ecosystem.
 
(If you are interested in the specific history of this open sourcing initiative, please consider seeking out the documentary about this phase in AOL's history called "Code Rush.")

The Mozilla Project grew inside AOL for a long while beside the AOL browser and Netscape browsers. But at some point the executive team believed that this needed to be streamlined. Mitchell Baker, an AOL attorney, Brendan Eich, the inventor of JavaScript, and an influential venture capitalist named Mitch Kapor proposed that the Mozilla Project should be spun out of AOL. Doing this would allow all of the enterprises who had an interest in working on open source versions of the project to foster the effort, while the Netscape/AOL product team could continue to rely on any code innovations for their own software within the corporation.

A Mozilla in the wild would need resources if it were to survive. First, it would need to have all the patents that were in the Netscape portfolio to avoid hostile legal challenges from outside. Second, there would need to be a cash injection to keep the lights on as Mozilla tried to come up with the basis for its business operations. Third, it would need protection from take-over bids that might come from AOL competitors. To achieve this, they decided Mozilla should be a non-profit foundation with the patent grants and trademark grants from AOL. Engineers who wanted to continue to foster the AOL/Netscape vision of an open web browser specifically for the developer ecosystem could transfer to working for Mozilla. (As they announced in their early blog post: https://blog.mozilla.org/press/2003/07/mozilla-org-announces-launch-of-the-mozilla-foundation-to-lead-open-source-browser-efforts/)

Netscape had created a crowd-sourced web index (called DMOZ, or the Open Directory) which had hard-coded links to most of the top websites of the time, aggregated by subject matter specialists who curated the directory in a fashion similar to today's Wikipedia. DMOZ went on to be the seed for the PageRank index of Google when Google decided to split away from powering the Yahoo search engine. It's interesting to note that AOL played a major role in helping Google become an independent success as well, which is documented in the book The Search by John Battelle.

Once the Mozilla Foundation was established (along with a $2 million grant from AOL) they sought donations from other corporations who were to become active on the Mozilla project. The team split out Netscape Communicator's email component as Thunderbird, a stand-alone open source email application. The browser, initially called Phoenix, was released to the public as "Firefox" because of a trademark issue with another US company over the usage of the term "Phoenix" in association with software.

Google, freshly independent from Yahoo, offered to pay the Mozilla Foundation for search traffic that it could route to its search engine. Taking revenue share from advertising was not something that the non-profit foundation was particularly well set up to do. So they needed to structure a corporation that could ingest these revenues and re-invest them into a conventional software business that could operate under the contractual structures of partnerships with public companies. The Mozilla Corporation could function much like any typical California company with business partnerships, without requiring its partners to structure their payments as grants to a non-profit.

When Firefox version 1.0 was launched, it rapidly spread in popularity. They did clever things to differentiate their browser from what people were used to in the Internet Explorer experience, such as letting their users block pop-up banners or customize their browser with extensions. The largest turning point for Firefox popularity came when a vulnerability was discovered in IE6 that allowed malicious actors to take over the user's browser for surveillance or hacking in certain contexts. (The vulnerability involved a component called ActiveX.) The US government urged companies to stop using IE6 and to update to a more modern browser. It was at this time I remember our IT department at Yahoo telling all its employees to switch to Firefox. I remember discussing this with my IT team and engineers, who assured me that Firefox plus a Yahoo toolbar was just like the Yahoo browser itself. With this transition Yahoo could give up the burden of keeping their browser up to date, as Mozilla would do all that work and update all users for free on a regular release cadence.

This word-of-mouth promotion happened across the industry, and employees would tell friends and families to switch browsers, or customize their browsers the way they did themselves. Suddenly you could get toolbars for any site you wanted that could add bookmarks, design themes and search preferences to the Firefox browser. Mozilla seemed to be doing a lot of work to keep the underlying browser updated. Yet this was a synergistic relationship, because all the parties who relied on it would promote Firefox with the might of their own marketing channels and web links that promoted their own browser extensions. It was a perfect symbiotic relationship between otherwise unrelated companies because they were working off of a piece of software that was open source. They could have removed the Firefox brand from the open source browser if they wanted to, and many companies did launch forked browsers replacing the Firefox brand with their own brand. But many liked the brand-trust that Firefox itself had. So they promoted "add to Firefox" instead of trying to replace the user's existing browser entirely.

Because Mozilla was a non-profit, as it grew it had to reinvest all proceeds from its growing revenues back into web development and new features. (Non-profits can't "cash out" or pay dividends to shareholders.) So it began to expand outside the core focus of JavaScript and browser engines alone. Several Mozillians departed to work on alternative open source browsers. The ecosystem grew suddenly, with Apple and Google launching their own open source browser engines on a similar model. As these varied browsers grew, the companies collaborated on standards that all their software would use, to ensure that web developers didn't have to customize their websites to address the idiosyncrasies of each browser. To this day browsers collaborate to maintain "web compatibility" across all the different browsers, and on the extensions model of each, so that developers can bring additional product features to the different browsers without those features having to be built into the browsers themselves.

When I joined Mozilla, there were three major issues that were seen as potential threats to the future of the open web ecosystem: 1) the "app-ification" of the web that was coming about in new smart-phones and how they encapsulated parts of the web, 2) the proliferation of dynamic web content that was locked in behind fragmented social publishing environments, and 3) the proliferation of identity management systems using social logins that were cumbersome for web developers to utilize. Mozilla, like a kind of vigilante superhero, tried to create innovative tactics to propose solutions to address each one of these. It reminded me of the verve of the early Netscape pioneers who tried to organize an industry toward the betterment of the entire ecosystem. To discuss these different threads, it may be helpful to look at what had been transforming the web in the years immediately prior.

What the Phone Did to the Web and What the Web Did Back

The web is generally based on html, CSS and JavaScript. A web developer would publish a web page once, and those three components would render the content of the webpage to any device with a web browser. What we were going into in 2008 was an expansion of content publication technologies, page rendering capabilities and even devices which were making new demands of the web. It was obvious to us at Yahoo at the time that the industry was going through a major phase shift. We were building our web services on mashups of content sources. The past idea of the web was based on static webpages that were consistent to all viewers globally. What we were going toward was a sporadically-assembled web tailored to each user individually. The new style of page assembly, marketed as "web 2.0," was frequently called "mash-up" or "re-mix," using multi-paned web pages that would assemble several discrete sources of content at the time of page load.

We called this AJAX, for "asynchronous JavaScript and xml" (xml = extensible markup language), which allowed personalized web content to be rendered on demand for each user. This kind of processing is referred to as "client side," meaning that your computer does the assembling of sources on your machine locally instead of just displaying a page that is entirely rendered on a web server. This is important not only because it off-loads burden from the web server, lowering cost, but also because it gives the user added privacy and security protections, as only they can see the unique assembly of rendered content. Web pages of this era appeared like dashboards and would constantly refresh elements of the page as the user navigated within panes of the site.
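To make that concrete, here is a minimal sketch of the client-side pattern I'm describing, written against the XMLHttpRequest API of that era. (The URLs and element IDs are hypothetical placeholders of mine, not any real service.)

```javascript
// Minimal AJAX pattern of the web 2.0 era: fetch a fragment of
// personalized content and inject it into one pane of the page,
// without reloading the page as a whole.
// (URLs and element IDs here are hypothetical placeholders.)
function loadPane(paneId, url) {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", url, true); // "true" makes the request asynchronous
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      // Assemble the response client-side, in the user's own browser.
      document.getElementById(paneId).innerHTML = xhr.responseText;
    }
  };
  xhr.send();
}

// Each pane of the "mash-up" page loads its own content source.
loadPane("weather-pane", "/api/weather?user=me");
loadPane("news-pane", "/api/headlines?user=me");
```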

In the midst of this shift to the spontaneously-assembled dynamic web, Apple launched the iPhone. What ensued immediately thereafter was a kind of developer confusion, as Apple marketed the concept that every developer wishing to be shown on its phones needed to customize content offerings as an app tailored to the environment of the phone. Apple didn't remove their Safari browser, but they diverted attention to stand-alone app frameworks discovered through the iTunes App Store. The "tunes" part of that store's name was there because the iPhone was a derivative of Apple's earlier-launched MP3 player, the iPod, so everything on the device, including software and "Podcasts," had to be synced with a PC through the music player. This was a staging strategy they replaced once the iPhone became a leading brand for the business. Much of this has changed in Apple's new architecture. Nowadays, the app marketplace, music app and podcasts app are all stand-alone products in their own right, and AppleID has been "set free" to have other purposes specific to account management and wallet-specific uses.

It seemed strange that users would no longer view content on Apple devices using URLs, but rather by downloading individual snippets of content into each developer's own isolated content browser on the user's iOS device. It wasn't just the developers who were baffled. It was the users too! It took a lot of marketing on Apple's part to get people familiar with an entirely new frame of thinking. They had to get people to stop going to their competitors' tools to search the web, but instead to think "There's an app for that!" as the Apple advertising slogan went. Apple wasn't just trying to confuse the market with this strategy. There are benefits to sand-boxing (meaning to metaphorically isolate a play area from the clean environment around it) different content sources from each other from a privacy and security standpoint. That's what the different frames of AJAX web pages did also. This just took the sand-boxing to an extreme. Apple engineers knew they were going to have a challenging time safeguarding a good user experience on their new phones if there were risks of conflicting code from different programs accessing the same hardware elements at the same time. So the app construct allowed them to avoid phone crashes and bugs by not letting developers talk to each other inside the architecture. Making the developers learn an entirely new coding language to build these apps was also done with a positive intent. They introduced new context-specific frameworks and utilities that were specific to a user on the go. These common frameworks provided consistency of user interface design that was specific to the Apple brand image. Also, developers could save time and cost if they did not need to create these common utilities and design elements from scratch. Theoretically a designer could build an app without the help of an engineer. An engineer could build an app without the help of a designer. It was an efficiency play to maximize participation by abstracting away the complexity of certain otherwise-mundane product concepts.

Seeing the launch of the iPhone capitalize on the concept of the personalized dynamic web, along with the elements of location-based content discovery, I recognized that it was an outright rethinking of the potential of the web at large. I had been working on the AT&T partnership with Yahoo at the time when Apple had nominated AT&T to be the exclusive mobile carrier for the iPhone launch. Yahoo had done its best to bring access to web content on low-end feature phones. These devices' view of the web was incredibly restricted. In some ways they felt like the AOL browser. You could get weather, stocks, sports scores, email, but otherwise you had difficulty navigating to anything that wasn't spoon-fed to you by your phone's carrier.

Collaborating with Yahoo Japan, we'd stretched the limit of what a feature phone could do with mobile-ready web content. We must tip our hats to all that NTT Docomo's "iMode" did with feature phone utilities before the touch-screen graphical user interface became mainstream. Japanese customers were amazingly versatile in adapting to the constraints of mobile phones. Users would write entire novels in episodic chapters for mobile phone consumption by commuters! But even at Yahoo we were falling over ourselves trying to help people re-format their sites to fit the small screen. It wasn't a scalable approach, though it bridged users through the transition as the web caught up.

The concept behind the iPhone was to present bite sized chunks of web content to the phone specific to the user need in the moment. Breadth was not an advantage in a small screen space at a time when the user probably had limited time and attention. In theory, every existing webpage should be able to render to the smaller screen without needing to be coded uniquely.  

Håkon Wium Lie had created the idea of CSS (the cascading style sheet), which allowed an html-coded webpage to adapt to whatever size screen the user had. Steve Jobs had espoused the idea that content rendered for the iPhone should be written in html5. However, at the time of the phone's release, many websites had not yet adapted their sites to that new standard, a means of developing html to be agnostic of the user's device. Web developers were very focused on the then-dominant desktop personal computer environment. While many web pioneers had sought to push the web forward into new directions that html5 could enable, most developers were not yet on board with those concepts. So the idea of the "native mobile app" was pushed forward by Apple to ensure the iPhone had a uniquely positive experience, distinct from what every other phone would show: a poorly-rendered version of a desktop-focused website.

I don't know any of the team at Apple who made these choices. But I have the sense that I understand the motives of what they were trying to do. In the developer community we have terms for "hacking" or "shimming" solutions to a code problem. These are generally meant in a positive connotation of making do with what you have to achieve the outcome that addresses the user need. In our lingo they are synonymous with "improvising," not to be confused with the concept of malicious activity to undermine someone or something. (That is the general cultural adoption of the term.) When a hack or a shim is used to make a project fit its expected acceptance criteria, there is a general understanding that, once time allows, the shim will be removed in favor of a more exact solution. So in my view, Apple shimmed the iPhone project and layered on the unwieldy scaffolding with the expectation and hope that over time those constructs could be removed from the code and replaced with a more user-friendly solution.

Mozilla engineers shared the concerns that Apple had about the pain points of using the web in a mobile context. (I'll speak generally here because there was no explicit company stance about Apple's approaches on this issue. But there was general sentiment that web developers needed some help on the html5 front.) Mozilla aspired to help the mobile adaptation of the web by conducting outreach on the ideas of "responsive web design," which is at the core of html5's purpose. This may sound esoteric. But what it means is that you should treat your URL as if it were a conversation with the user. A user's browser transmits what's called a header in its http request to visit your site. Ideally, a webpage should listen to who's calling before it answers. If the call comes from a Safari browser on an iPhone, or a Chrome browser on an Android device, your site should be "responsive" in that it returns a mobile-styled format, using CSS to respect the limits of the device and surface the specific elements of your page that are relevant to the user's context.
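As a small illustration of the idea: a responsive breakpoint is usually declared in CSS with an @media rule, but the same check can be expressed from the JavaScript side using window.matchMedia. (The 600-pixel breakpoint and the class name below are arbitrary examples of mine, not any standard.)

```javascript
// A sketch of "responsive" behavior from the JavaScript side.
// The same rule is more commonly written directly in CSS as:
//   @media (max-width: 600px) { ... }
// The 600px breakpoint and "mobile-layout" class are arbitrary.
var mobileQuery = window.matchMedia("(max-width: 600px)");

function applyLayout(query) {
  // Toggle a class so the stylesheet can restyle the page
  // for the current device width.
  document.body.classList.toggle("mobile-layout", query.matches);
}

applyLayout(mobileQuery);              // style for the current device
mobileQuery.addListener(applyLayout);  // re-style if the viewport changes
```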

First Firefox OS Phone

The Mozilla team envisioned a phone unbound from the app ecosystem. Mozilla's Chief Technical Officer, Brendan Eich, and his team of engineers decided that they could make a web phone using JavaScript and html5 without the crutch of app-wrappers. The team took an atomized view of all the elements of a phone and sought to develop a web interface (called an API, for application programming interface) to allow each element of the device to speak http web protocols, such that a developer could check battery life, motion status, gesture capture or other important signals relevant to the mobile user that hadn't been utilized in the desktop web environment. And they succeeded! The phones launched in 28 countries around the world.
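For a flavor of what such device APIs look like to a web developer, here is a sketch using battery status. Modern browsers expose this through navigator.getBattery(); FirefoxOS had its own family of such APIs covering telephony, sensors and more, so take this as illustrative rather than the exact FirefoxOS interface.

```javascript
// A sketch of querying device state over a web API, in the spirit
// of the FirefoxOS device APIs described above. This uses the
// Battery Status API as modern browsers expose it.
if ("getBattery" in navigator) {
  navigator.getBattery().then(function (battery) {
    console.log("Battery level: " + Math.round(battery.level * 100) + "%");
    console.log("Charging: " + battery.charging);

    // React when the device is plugged in or unplugged,
    // e.g. to dim the UI and conserve power.
    battery.addEventListener("chargingchange", function () {
      console.log("Charging state changed: " + battery.charging);
    });
  });
}
```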


Christopher presenting at FISL 13
I worked on the Brazilian market launch of the FirefoxOS phones, where there was dizzying enthusiasm about the availability of a lower-cost smart phone based on open source technology. As we prepared for the go-live of the launch in Brazil, the business team coordinated outreach through the mobile carriers to announce availability (and to prepare shelf space in carrier stores) for the new phones. Our events planning team coordinated speaker appearances for us at the developer conferences where we'd speak about html5 for mobile devices. This way we'd have dozens of major Brazilian portals and services optimized for viewing on mobile browsers in time for the upcoming launch.

Fabio Magnoni discussing Firefox OS in Brazil
Mozilla contributors in Brazil reached out to the largest websites in the country to ask them to consent to listing their sites as web-apps on the new devices. Typically, when you buy a computer, web services and content publishers aren't preloaded on the device; they are simply accessible via the device's browsers. But the iPhone and Android trend of specific apps for web content was so embedded in people's thinking that many site owners thought they needed to do something special to be able to provide content and services to our phone's users. Mozilla therefore borrowed the concept of a "marketplace," which was a web index of sites that had posted their availability to FirefoxOS phone users. Users then put bookmarks to the web-apps on their home screen, much like people were used to on Apple and Android devices. Later Apple Safari and Google Chrome also pushed for home-screen bookmarks in lieu of native apps. But the conventional behavior of users constantly acquiring apps, instead of relying on the same developer's website, continues to this day.

Mozilla's engineers suggested that there shouldn't be a concept of a "mobile web" that was distinct or limited from the broader web. We should do everything we could to persuade web developers and content publishers to embrace mobile devices as "1st class citizens of the web." So they hearkened back to the concepts in CSS, and championed device-aware responsive web design under the moniker of "Progressive Web Apps." The PWA concept is not a new architecture. It's the idea that a mobile-enhanced URL should be able to do certain things that a phone-wielding user expects it to do. Even though Mozilla eventually discontinued the phone project, the PWA work is now being heavily championed by Google for the Android device ecosystem.
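A minimal PWA bootstrap looks something like the sketch below: register a service worker so the URL can keep working offline like an installed app. (The file name sw.js is an arbitrary example of mine; a full PWA would also ship a web app manifest alongside it.)

```javascript
// A minimal Progressive Web App bootstrap: register a service worker
// so the site can cache its assets and keep working offline.
// "sw.js" is an arbitrary example file name; a real PWA also ships
// a web app manifest (<link rel="manifest" href="manifest.json">).
if ("serviceWorker" in navigator) {
  navigator.serviceWorker
    .register("/sw.js")
    .then(function (registration) {
      console.log("Service worker registered for", registration.scope);
    })
    .catch(function (err) {
      console.log("Service worker registration failed:", err);
    });
}
```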

The first Firefox OS powered TV demonstrated in Barcelona
After the launch of the phone, because Mozilla open sources its code, many other companies picked up and furthered the vision. Panasonic forked the code to power their SmartTV, which was able to surf the web without needing an attached computer. Kickstarter campaigns launched to fork the code into web-enabled smart-watches. New phone manufacturers forked the code to support feature phones, and various versions for Raspberry Pi devices allowed the operating system to power web-based picture frames and other devices for "Internet of Things" developer initiatives.

The Mozilla html phone launch wasn't intended to overtake the major handset platforms. It was intended to make the point that web browsers were capable of doing the same things that an app wrapper would, so long as developers had access to the capabilities of the device over web protocols. This is one of the most admirable things about the philosophy of Mozilla and the open source community that supports it. An idea can germinate in one mind, be implemented in code, then be set free in the community of open source enthusiasts who iterate and improve on it. There might be no PWA capability on iPhone and Android devices if Mozilla hadn't tried to launch a phone that had only PWAs. While the open sourcing of Netscape wasn't the start of the open source movement, it has contributed significantly to the practice. The people who created the world wide web continue to operate under the philosophy of extensibility. The founders of Google's Chromium project were also keen Mozilla supporters, even though launching a separate open source browser made them competitors. (Remember, the Netscape open sourcing had specifically been for the purpose of spurring competitors to action.) The fact that a different company, with a different code base, created a similarly thriving open source ecosystem of developers aiming to serve the same user needs as Mozilla's is the absolute point of what Mozilla's founders set out to promote in the spin-off from Netscape. And it echoes the same sentiments I'd heard expressed in Washington, DC back in the late 1990s.

One of the things that I have studied a great deal is the US patent system. Back in the early days of the US government, Secretary of State Jefferson created the concept of a legal monopoly. It was established by law for the government first, then expanded to the broader right of all citizens, and later all people globally, via the US Patent and Trademark Office. When I was in college I had an invention that I wished to patent and produce for the commercial market. My physics professor suggested that I not wait until I finished my degree to pursue the project. He introduced me to another famous inventor from my college and suggested I meet with him. Armed with great advice, I went to the USPTO to research prior art that might relate to my invention. Upon thorough research, I learned that anyone in the world can pursue a patent and be given a 17-year monopoly to protect the invention while the merits of the product-market fit can be tested. Thereafter, the granted patent becomes open to all, free of royalties, for the global community. "What!?" I thought. "I am to declare the goods to the USPTO so they can give it away to all humanity shortly thereafter, once I did all the work to bring it to market?" This certainly didn't seem like a very good deal for inventors in my view. But it also went back to my learnings about why the government prefers certain inefficiencies to propagate for the benefit of the greater common good of society. It may be that Whirlpool invented a great washing machine. But Whirlpool should only be able to monopolize that invention for 17 years before the world at large can reap the benefits of the innovation without royalties due to the inventor.

My experiences with patents at Yahoo were also very informative. Yahoo had regularly pursued patents, including for one of the projects I'd launched in Japan. But their defense of patents had been largely in the vein of the "right to operate" concept, in a space where their products were similar to those of other companies. I believed that the behaviors of Yahoo, AOL and Google were particularly generous and lenient in patent protection. As an inventor myself, I was impressed that the innovators of Silicon Valley, for the most part, did not pursue legal action against each other. It seemed they actually promoted iteration upon their past patents. I took away from this that Silicon Valley is more innovation focused than business focused. When I launched my own company, I asked a local venture capitalist whether I should pursue patents for a couple of the products I was working on. The gentleman, who was a partner at the firm, said, "I prefer action over patents. Execute your business vision and prove the market value. Execution is more valuable than ideas. I'd rather invest in someone who can execute rather than someone who can just invent." And from the 20 years I've seen here, it always seems to be the fast follower rather than the inventor who gets ahead, probably precisely because they focus on jumping directly to execution rather than spending time scrawling protections and illustrations with lawyers.

Mozilla has, in the time I've worked with them, focused on implementing first in the open, without thinking that an idea needed to be protected separately. Open source code exists to be replicated, shared and improved. When AOL and the Mozilla team open sourced the code for Netscape, it was essentially opening the patent chest of Netscape intellectual property for the benefit of all small developers who might wish to launch a browser without the cumbersome process of vetting the licenses of the code base. Bogging down developers with patent-encumbered code would slow those developers from introducing their own unique innovations. Watching a global market launch of a new mobile phone based on entirely open source code was a phenomenal era to witness. And it showed me that the benevolent community of Silicon Valley's innovators had a vision much akin to that of the people I'd witnessed in Washington, DC. But this time I'd seen it architected by the intentional acts of thousands of generous, forward-thinking innovators rather than through the acts of legislation or legal prompting of politicians.

The Web Disappears Behind Social Silos

The web 2.0 era, with its dynamically assembled web pages, was a tremendous advance for the ability of web developers to streamline user experiences. A page mashed up from many different sources could enhance the user's ability to navigate across troves of information that would otherwise take a considerable amount of time to click and scroll through. But something is often lost when something else is gained. One of the boons of the web 1.0 era was the idea that a website was a relatively static set of components that were hosted on a URL on a given day. So my search engine company LookSmart could have a fairly authoritative index of the entire internet as it would look to any user across the world. One fascinating project that continues to this day is a search engine that retains a static view of what every major URL looked like on the day it was indexed: today you can see what Mozilla's webpage looked like in 1999.

I often wondered what the advent of web 2.0 would do to the Internet Archive. In 100 years, would it seem that the history of web archiving stopped abruptly when users went from static html to dynamic profile-based page loads? One thing was for certain: there would no longer be a consistent view of the web across the world. Users in certain countries are now excluded from access to content that the same user would be able to see if their IP address were determined to be local to the site host. In many cases this has nothing to do with the opinion of the local government. Rather it has to do with whether the specific site host believes it is important to make content available to those outside the country, or with a deliberate decision to exclude international users because the site hosting costs are too great to support them, or because of some content licensing restriction for content featured on their domain. This is a bit of a sad blow to the freedom of information access globally, especially for people in countries where it is hard to get access to content locally. But it is also a kind of degradation of what the web's pioneers had intended. Retrospectively, history will most likely show a narrowed web in the archive, as the dynamic pages will not be saved. What happens there is robustly logged and tracked in the moment. But it won't be reassembled for viewing in the future. It is now a much more ephemeral art.

When Twitter introduced its micro-blog platform, end users of the web were able to publicize content they curated from across the web much faster than by authoring full blog posts and web pages about content they sought to collate and share. Initially, the Twitter founders maintained an open platform where content could be mashed up and integrated into other web pages and applications. Thousands of great new visions and utilities were built upon the idea of the open publishing backbone it enabled. My own company and several of my employers also built tools leveraging this open architecture before the infamous shuttering of what the industry called "The Twitter Firehose." But it portended yet another phase shift of the very early social web. The Twitter we knew became a diaspora of sorts as access to the firehose feed was locked down behind identity-protecting logins. This may be a great boon to those seeking anonymity and small "walled garden" circles of friends. But it was not particularly good for what many of the innovators of the web 2.0 era had hoped for: the greater enfranchisement of web citizenry.

The early pioneers of the web wanted to foster an ecosystem where all content linked on the web could be accessible to all, without hurdles on the path that delayed users or obscured content from being sharable. Just as the app-ified web of the smartphone era cordoned off chunks of web content that could be gated by a paywall, the social web went through a further splitting into factions as the login walls descended around environments where users had previously been able to easily post, view and share.

The parts of the developer industry that weren't mourning the loss of the open-web components of this great social fragmentation were complaining about the psychological problems that emerged one step removed from the underlying cause. Fears of censorship and filter bubbles spread through the news. The idea that web citizens now had to go out and carefully curate their friends and followers led to psychologists criticizing the effects of social isolation on one side and the risks of altering the way we create genuine off-line friendships on the other.

Hugh Finnan and Todd Simpson demo browser WebRTC
Mozilla had a couple of its own forays into social tools as well. The Firefox team integrated a utility that allowed for in-browser chat sessions over a new privacy-preserving protocol called WebRTC (short for Web Real-Time Communication), and collaborated with the Chrome developer team so that WebRTC sessions between Firefox and Chrome enabled any user to connect between the two browsers without having to download a chat app. This concept worked so well that the Firefox team decided to deploy a built-in communications utility to enable quick and convenient WebRTC calls from Firefox to any of the other modern web browsers. Branded as "Firefox Hello," the utility enabled free web streaming conversations like those offered by the Skype application.
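For the curious, setting up a WebRTC call looks roughly like the sketch below. A real session like Firefox Hello's also needs a signaling channel (for example a shared URL or a web server) to carry the offer, answer and network candidates between the two browsers; that plumbing is omitted here.

```javascript
// A much-simplified sketch of WebRTC call setup. The signaling
// channel that exchanges offers/answers between the two browsers
// is deliberately omitted.
var peer = new RTCPeerConnection();

navigator.mediaDevices
  .getUserMedia({ video: true, audio: true })
  .then(function (stream) {
    // Attach the local camera and microphone to the peer connection.
    stream.getTracks().forEach(function (track) {
      peer.addTrack(track, stream);
    });
    return peer.createOffer();
  })
  .then(function (offer) {
    return peer.setLocalDescription(offer);
  })
  .then(function () {
    // peer.localDescription would now be sent to the other browser
    // (Firefox, Chrome, etc.) through the signaling channel.
    console.log("Offer ready to share:", peer.localDescription.type);
  });
```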

Shared web session in Firefox Hello
Firefox Hello didn't have the concept that your contacts needed to be using the same software as you, so long as their browser was current and supported WebRTC. Because it was ubiquitously available, we didn't need to have a built-in address book. Typically, native address books in applications are used because only those people with the same software as you are compatible as contacts. But if absolutely everyone can use the general internet, there is no need for narrowly defined contact lists. Exclusive networks are advantageous for marketing purposes. For instance, social networks like to solicit users to join certain "clubs" sheerly because others of their friends are doing so. But this often doesn't come with a specific benefit to the new user. Those might be seen as "lock-in" vectors to grow audiences for a product, a strategy we'd seen rife in app development. But Mozilla is more enthusiastic to see software utilities grow by virtue of their benefits, not their limitations. Mozilla aspires to show the way to avoid the crutches of app development, such as the profligate porting of user data between multiple locations, which is cumbersome, potentially risky and hard to understand for most users.

Part of the benefit of WebRTC was that the connection over internet protocol enabled use cases beyond just staring at web camera images. Our users could share content in real time, such as webpages they were viewing, pictures and potentially web streaming. But we didn't have a concept of file storage or transfer. We knew that this was a particular pain point for our users. For instance, if users wanted to share pictures but couldn't do it over a live viewing session in Firefox Hello, the files typically had to be sent via an email host. Most email hosting services only permitted sharing a few pictures, because of message size limits. Even in Thunderbird, users could only share as many megabytes in a message as their email host permitted.

To address this, Mozilla launched Firefox Send, which solved the underlying issue of storage limitations that hindered the capacity of traditional webmail services, and spared users the cost of the cloud hosting services they would otherwise need as work-arounds for sharing files when Thunderbird or Outlook couldn't assure the transfer.

The past decade has drastically changed the general concept of what a social network is when it comes to the web. Many people don't think of Mozilla specifically as a social network. But when the AOL team created the Bugzilla platform for open sourcing the code tree of Netscape, they had essentially created a highly-efficient, purpose-built social network for collaboration on software. In our community and company we had discussions around codes of conduct to foster and enforce the well-being of our community via our social platforms. We had suffered incidents of trolling, inappropriate political commentary, biased language and all the general problems that later arose on other web social platforms. But we dealt with them by engaging in dialogue with the community participants. We didn't try to algorithmically solve something that was a fundamental human decency issue. We faced it directly with discussions about decency and agreements about our shared purpose. We viewed ourselves as custodians of a collaborative environment where people from all walks of life and all countries could come together and participate without barriers or behaviors that would discourage or intimidate open participation. Some people in the community opined that if Mozilla had continued expanding its social networking initiatives around community tools, communication and services, it might have had some good demonstrable examples of how to address the ills that other public social networks face today.

The movement toward web-based social networks resulted in a new concept of content indexing and display in an atomized way. A personalized webpage looks radically different to every individual viewing it. Some of these shifts in content discovery were beneficial. But some of the personal data that powered these webpages led to data breaches and loss of trust in the internet community. Mozilla's engineers thought they could do a great deal to improve this. One of the things the Mozilla team thought it could address was the silo and lock-in problem of web authentication.

The social web services introduced a Babel-like plethora of social logins, which may have been a boon for an emerging startup needing a quick login option, but came with a mix of problems for the web. Website authors had to go through particular effort to integrate multiple third-party code tools for these social logins. So Mozilla focused on the developer-side problem. As the social web proliferated, webmasters were putting rows of bright icons on their websites prompting visitors to sign in with five or more different social utilities. This led to confusion, because the site host never knew through which social site a user had registered an identity. If a user visited once and logged in with Twitter, and another time accidentally logged in with Facebook, their personalized account content might be lost and irretrievable.

Mozilla's engineers thought this was something a browser could help with. If a credential was stored at the browser level, the user wouldn’t need to be constantly peppered with options to authenticate against ten different social identities. By initial design, “BrowserID” let a user store a login attribute from any email host they nominated, then assert that identity to a supporting website, much as the OpenID architecture underpinned many of those social logins. Firefox, or any derivative fork of its code, didn’t transmit the chosen identity to any central repository. This was part of Mozilla’s privacy-by-design principle for product development: our rule was not to transport user data anyplace it didn’t need to be. A key benefit of BrowserID was that it operated as a client-side tool, keeping private data in the control of the user and eliminating the excessive reuse of passwords that was becoming a point of privacy vulnerability on the web.
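
As best I can reconstruct it from the documentation of the era, the site-side of BrowserID (later branded “Persona”) looked roughly like this. The /auth endpoints are illustrative stand-ins for whatever a site’s backend actually exposed.

```js
// The include.js shim exposed navigator.id. The onlogin callback
// receives a signed assertion that the site verifies on its backend;
// the /auth endpoints here are illustrative.
navigator.id.watch({
  loggedInUser: null, // who the site currently believes is signed in
  onlogin: (assertion) => {
    fetch('/auth/login', { method: 'POST', body: assertion });
  },
  onlogout: () => fetch('/auth/logout', { method: 'POST' }),
});

// Wired to a "Sign in" button: the browser, not the page, prompts the
// user to choose an email identity.
document.querySelector('#signin').addEventListener('click', () => {
  navigator.id.request();
});
```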

This movement was well received initially, but many supporters called for a more prominent architecture showing users where their logged identities were stored. So Mozilla morphed the concept from a host-agnostic tool into a client-agnostic “Firefox Account” that could be accessed across all the different copies of Firefox a user had (on PCs, phones, TVs or wherever they browsed the web) and even used in browsers and apps outside Mozilla’s direct control. With this cloud service infrastructure, users could synchronize their application preferences across all their devices, with end-to-end cryptography ensuring the data couldn’t be read in transit between any two paired devices.

The other great inconvenience of the social web was the number of steps needed to share conventional web content on it. Users had to copy and paste URLs between browser windows or tabs to comment on or share what they were reading. The work-around was embedding social hooks into every webpage in the world one might want to share, but that would require the developers of every site to add a piece of JavaScript so users could click a button to upvote or forward content. If every developer actually did that, it would bog down page loads across the entire internet with extraneous code fetched from public web servers: hugely inefficient, as well as a potential privacy risk. Turning every webpage into a hodge-podge of promotions for the then-popular social networks didn’t seem like an elegant solution to Mozilla’s engineers. It looked more like a web plumbing issue that could be abstracted to the browser level.

Fortunately, this was obvious to much of the web development community as well. So the Mozilla team, along with Google’s Chrome team, put their heads together on a solution that could be standardized across web browsers, so that every website in the world didn’t have to code a solution unique to every social service provider. The importance of this was accentuated when the website of the United States government integrated a social engagement platform called AddThis, which was found to be tracking visitors with tiny code snippets the government’s web developers hadn’t engineered. People generally expect privacy when visiting web pages. The idea that reading what the president had to say came with the tradeoff of being tracked off that site afterward appeared particularly risky.

To enable a more privacy-protecting web, while preserving the convenience users sought from social utilities, Mozilla Labs borrowed a concept from the Progressive Web App initiative that came out of the Firefox phone work. PWAs utilized the concept of a user “intent.” Just as a thermostat expresses its setting as a “call for heat” from the house’s furnace, there needed to be a way for a user’s “intent to share” to be expressed at the interface level of the browser, with nothing altered in the page itself. Phone manufacturers had enabled a similar concept of sharing at the operating-system level for downloaded applications, each with its own embedded sharing API; a browser needed the same capability.
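
That browser-level “intent to share” eventually surfaced in the standardized Web Share API. Here is a minimal sketch of the pattern; the #share button is an assumed page element, not anything from our original implementation.

```js
// The page hands the browser what to share; the browser (or OS)
// presents the user's own choice of share targets. No social network's
// code ever touches the page.
document.querySelector('#share').addEventListener('click', async () => {
  if (navigator.share) {
    await navigator.share({ title: document.title, url: location.href });
  } else {
    // Fallback for browsers without Web Share support.
    await navigator.clipboard.writeText(location.href);
  }
});
```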

Social API implementation demonstration page

To achieve this in the browser, we used the concept of a “Social API.” An API (application programming interface) is a kind of digital socket or hand-shake protocol that mediates between programs when the user starts a push or pull request. As an example, Thunderbird email software can interface effortlessly with an email host account (such as Gmail or Hotmail) through an API, once the user authorizes the program to make such requests on their behalf. It is similar to a browser extension, but it could be achieved without pushing software to the user.
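
A hedged illustration of that hand-shake pattern, with a hypothetical mail host endpoint and token standing in for any real service:

```js
// The client presents a token the user granted it, and the host
// answers with data, just as a desktop mail client does.
const userGrantedToken = '...'; // obtained via the user's sign-in flow

async function fetchInbox() {
  const response = await fetch('https://mail.example.com/api/messages', {
    headers: { Authorization: 'Bearer ' + userGrantedToken },
  });
  return response.json(); // the host returns the user's messages
}
```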

Just as Firefox Accounts could sync services on behalf of a user without knowing any details of the accounts being synced, so too should the browser be able to recognize when a user wants to share something, without making them dance between browser windows with copy and paste.

So Mozilla explored the concept of browsers supporting “share intents,” just as phone and PC vendors did at the time to support the convenience of social utilities. Because many of these services had the concept of notifications, we explored “publisher-side” intents as well. The convenience this introduced was that users no longer had to stay logged into their social media accounts to be notified when something required their attention or an inbound message arrived.

At the time, there was a highly marketed trend in Silicon Valley around “gamification” in mobile apps: the idea that developers could award points and rewards to drive loyalty and return visits among web users. Notifications were heralded by some as a great way to delight visitors and lure them back for more of whatever you offered. We wondered whether developers would over-notify to drive traffic to their sites, saturating and distracting users at a cost to their attention and time greater than any benefit to them.

Fortunately, we did not see huge notification abuses from the sites that supported Social API. We did receive widespread interest from the likes of Facebook, Twitter, Yahoo and Google, the major messaging service providers of the day. So we jointly worked to up-level this to the web standards body, the World Wide Web Consortium (W3C), for promotion outside the Firefox ecosystem, so that it could be used in any web browser that supported W3C standards.
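
The notification side of this work is visible today in the standardized Notifications API. A minimal sketch of how a page asks the browser, rather than a social silo, for permission to notify:

```js
// Permission is granted by the user at the browser level; only then
// can the site raise a system notification.
async function notifyUser(message) {
  const permission = await Notification.requestPermission();
  if (permission === 'granted') {
    new Notification('New message', { body: message });
  }
}
```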

Working with this team, I learned a great deal from my peers in the engineering organization. At first I thought: if this is such a great idea, why doesn’t Firefox make it a unique selling point of our software? What’s the rush to standardize it? Jiminy Cricket voices across the organization pointed out that the goal of shipping open source code in the browser is precisely for others to adopt the best ideas and innovate upon them, rather than holding them dear. The purpose of the standards organizations we worked with was to pass those innovations on so that everyone could use them without adopting Firefox-specific code. Good ideas, like the USPTO’s concept of eventual dissemination to the broader community, are meant to spread through the entire ecosystem, so that webmasters avoid the pitfall of coding their websites to the functionality of a single piece of software or web browser. Mozilla engineers saw their mission, in part, as championing web-compatibility (often shortened to “webcompat” in our discussions). Firefox has a massive population of addressable users, but we want web developers to code for all users to have a consistently great experience of the web, not just our audience.

There is a broad group of engineers across Microsoft, Google, Apple, Samsung, Mozilla and many smaller software developers who lay down the flags of their respective companies and band together in the standards bodies to dream of a future internet beyond the capability of the software and web we have today. They do this out of commitment to the next generation of internet, software and hardware developers who will follow in our footsteps. Just as we inherited code, process and standards from our forebears, it is our responsibility to pass on the baton without being hampered by the partisanship of our competing companies. We have to build the web today for the generations who will face the new demands of the technology environment we create for tomorrow.

Over the course of the Social API project, Firefox, Chrome and Microsoft’s new Edge browser each implemented different models of addressing the use case we were solving for. Over time, we agreed on ways the tools could be standardized, then upgraded our software to behave consistently across our products on different operating systems.

During the early craze around social media, there were many critics of the ease of sharing, since users perhaps unfamiliar with the risks of privacy exposure could broadcast potentially private information. Facebook had been commissioning studies on the psychological benefits and pitfalls of social media use. During one of our company all-hands, I posed this issue to one of my mentors, a Mozillian engineer named Shane Caraveo: “Shouldn’t we be championing people to get off these social platforms and build their own web pages, instead of facilitating greater use of social tools?” Shane pointed out that Mozilla does do that, through the educational resources of the Mozilla Developer Network, which demonstrates with code examples exactly how to build your own website. Then Shane made a comment that has stayed with me for years: “I don’t care how people create content and share it on the internet. I care that they do.”

The First Acquisition

The standardization of web notifications across browsers was one of the big wins of our project. The other, for Mozilla specifically, was the acquisition of the Pocket content aggregation and recommendation engine. When I worked at Yahoo, one of the first acquisitions they made was the bookmark backup and sharing service del.icio.us. Our Yahoo team had seen the spread of social web-sharing as one of the greatest new opportunities for web publishers to disseminate their creations and content. They built their own social networks, Yahoo360 and Yahoo Answers, and acquired other social sharing platforms, including upcoming.org and flickr.com, during the same period. These social sharing vectors bolstered the visibility of web content by earning praise and inspiring the desire to “re-post” content among circles of friends. Many years later, Yahoo sold the cloud bookmarking business to the founder of YouTube, who sought to rekindle the idea and created a Social API socket for Firefox.

Another entrepreneur, Nate Weiner, had taken a different approach to the web-archiving needs of his product’s users. His service, called Pocket, allowed the quick archiving and labeling of web content in a user’s personal account, so Pocket users carried their own collection of web content with them for the future. Nate built browser extensions, just as del.icio.us had, and also implemented Social API. But Pocket’s popularity seemed to hinge on its mobile and tablet apps.

Saving web content may seem like a fringe use case for only the most avid web users, but the Pocket service saw considerable demand. With funding from Google’s venture investing arm, among others, Nate was able to grow Pocket considerably and even expand into a destination website where users could browse recommendations of saved content from a small, tight-knit group of avid curators. It was perhaps the decentralization of Pocket’s approach that made it work so well. The article-saving behaviors of millions of Pocket users produced a list of high-quality content that was in no way influenced by the marketing strategies of the news publishing industry. It served as a kind of barometer of what was trending across the web: not just what was popular, but what people wanted to save for their own reference later.

When I first met the Pocket team, they commented that their platform was not inherently social, so the constraints of the Social API architecture didn’t fit their users’ needs. They suggested we create a separate concept around “save intents,” which didn’t fit the constraints of the social media intents that phones and services were pursuing at the time. When Firefox introduced the “Save to Pocket” function in our own browser chrome, it seemed like a combination of “Save to Bookmarks” (client-side storage) and the Firefox Accounts “Sync” feature. But we found that a tremendous number of users were keener on the Pocket save function than on the sync-bookmarks architecture we already had in the browser.

Because Google had already invested in Pocket, I had thought it more likely they would eventually join the Chrome team. But by a stroke of good fortune, the Pocket team had a very good experience working alongside the Mozilla team and decided they preferred to join Mozilla to pursue the growth of their web services. This was the first acquisition Mozilla had executed. Having seen how acquisition integrations sometimes fared in Silicon Valley, I was fascinated to see how Mozilla would operate another company with its own unique culture.

Post-acquisition, Mozilla used the Pocket platform as a content sharing and discovery tool in the new-tab page of the browser, so that when Firefox users opened a new tab they could see recommendations from the peer community of Firefox users, much as the original Netscape browser had featured recommended websites in its Open Directory, curated by Netscape users twenty years prior. Mozilla didn’t discontinue Pocket’s initiatives outside Firefox after the acquisition and integration: Pocket still supports all the browsers that compete with Firefox, and its community of users and contributors remains robust and active well beyond the existing Firefox user base.

Emerging Technologies

Mozilla Hubs VR development platform

Mozilla's Emerging Technology team evaluates how web technology can be applied to next-generation challenges, so that the web remains a preferred option for building future tools that might otherwise be app-based. For example, as developers began to build 3D virtual reality experiences, Mozilla’s engineers realized there needed to be a way to serve 3D content on the web without complex software downloads. So they created a means of using JavaScript to embed 3D graphics inside web pages, rendered in the browser itself, through an initiative called aframe.io. They also launched a web development utility called Hubs for first-time developers to become familiar with building 3D spatial and audio landscapes in a simple drag-and-drop dashboard tool, complete with free hosting for developers who wished to host small events in the simulated 3D chat rooms. For larger events, such as the Augmented World Expo and the Mozilla all-hands meetings, they built out conference-sized deployment capability for the Hubs platform and hosted their own developer events in the virtual space as well.
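
Because A-Frame extends HTML with custom elements, a 3D scene can be assembled like any other DOM content. A minimal sketch, assuming the aframe.io library script is already loaded on the page:

```js
// Build a scene from DOM elements; A-Frame renders it in the browser
// with WebGL once its <script> is on the page.
const scene = document.createElement('a-scene');

const box = document.createElement('a-box');
box.setAttribute('position', '-1 0.5 -3'); // x y z, in meters
box.setAttribute('color', '#4CC3D9');

const sky = document.createElement('a-sky');
sky.setAttribute('color', '#ECECEC');

scene.appendChild(box);
scene.appendChild(sky);
document.body.appendChild(scene);
```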

Aside from the visual aspects of the expanding internet, voice-based assistants began to gain momentum as Apple, Google and Amazon each launched their own audio concierge tools (Siri, Google Assistant and Alexa, respectively). Mozilla has a large group of engineers who work specifically on content accessibility for people with sight or hearing impairments. So expanding the ability of web services to read content aloud to users without a screen, or to listen to user prompts, became a central focus for the Mozilla Emerging Technology team.

Andre Natal presents on Voice APIs at Mozilla
During my time in Brazil launching the FirefoxOS phone, I attended a conference called FISL, which brought together hundreds of developers working on open source projects. One of the engineers there presented an app he’d built that could give spoken directions in response to spoken questions. I asked him to consider adapting his web service for navigation on FirefoxOS phones. Andre went further than that: he decided to join our Emerging Technologies team, where he developed the first “Voice Search” extension for Firefox desktop users.

"Read Page" built into Firefox Reader Mode


In parallel with listening for voice commands, Mozilla was actively working on letting the Firefox browser and the Pocket apps read narrative text aloud to the user. In Firefox, all the user has to do is click into “Reader Mode” in the toolbar to see the speech prompt at the left of the browser frame, where they can select from various spoken accents for Firefox to use in the narration.


Mozilla's Common Voice active-reading donation page

If a browser can read and listen, what do you do for an encore? Mozilla’s answer was to make sure developers could access raw open source audio and speech-processing algorithms to build their own speech tools. Since Mozilla had a large base of contributors excited to donate samples of their accents and speech styles, it created a portal where people from around the world could read sample text aloud, which would then be validated for intelligibility by another contributor of the same language. The Common Voice audio sample set is now one of the largest open source voice sample databases in the world.


Mozilla WebThings Gateway distributed by OKdo

With the advent of new internet-ready hardware devices in the home, Mozilla saw an opportunity to carry the privacy concepts of the web-enabled device APIs from the FirefoxOS era into an in-home router tool of its own. This tool could act as a secure gateway to those devices, instead of having each device connect independently over the web to sundry service vendors. The Emerging Technology team borrowed many concepts from the PWA initiative to let a person’s home computer serve as the primary interface for the dozens of home appliances a person might have. (WebThings Gateway could control lamps, speakers, TVs, doorbells and webcams.) This avoided the need for dozens of separate applications to control the many utilities at home, and ensured access to them was “fire-walled” from outside control. The benefit was interoperability: many of these devices were incompatible with one another, but by using a common command language, such as HTTP, between the router and the devices, we could enable coordinated behaviors for linked devices even if they were made by different companies or used radio frequencies incompatible with the other devices in the room. Mozilla WebThings Gateway was distributed by the original equipment manufacturer OKdo as kits that worked out of the box and could be configured in the user’s web browser.
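
The spirit of the design is that every appliance becomes an ordinary web resource. The sketch below is illustrative only: the gateway address, thing id, property path and token are my stand-ins, not exact product endpoints.

```js
// Toggling a lamp is an ordinary HTTP request to the local gateway.
const gatewayToken = '...'; // issued when the user pairs the gateway

async function turnOnLamp() {
  await fetch('http://gateway.local/things/living-room-lamp/properties/on', {
    method: 'PUT',
    headers: {
      'Content-Type': 'application/json',
      Authorization: 'Bearer ' + gatewayToken,
    },
    body: JSON.stringify({ on: true }), // set the lamp's "on" property
  });
}
```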


Protection of Anonymity

One of the most fascinating industry-wide efforts I saw at Mozilla was the campaign to protect user anonymity and enable pseudonymity. (Using a pseudonym means calling my website “ncubeeight” instead of “Christopher’s Page,” for instance, the way Mark Twain authored books under his nom de plume rather than his birth name.) As social networking services proliferated in the Web 2.0 era, several mainstream services sought to force users into a web experience where they could have only one single, externally verified identity. The policy was lambasted in the web community as a form of censorship that blocked internet authors from using pen-names and aliases.

On the flip side of the argument, proponents of the real-name policy theorized that the anonymity of web identities led to trolling in social media, where people would be publicly criticized by anonymous voices who could avoid reputational repercussions. This would, in theory, let those voices say things about others unconstrained by the normal decency pressures of daily society.

Wired magazine wrote editorial columns against real-name policies, arguing that users turn to the web to be whoever they want to be, expressing ideas anonymously that they couldn’t express without multiple pen-names. A web identity (sometimes called a “handle,” from the early CB radio practice of using declared identities in radio transmissions) could let a person be more creative than they otherwise would be. One opinion piece suggested the web is where people go to be a Humpty-Dumpty assortment of diverse identities, not to be corralled into a single one. I myself used multiple handles: I wanted my music hobby sites, photography site and business site to all be distinct. In part, I didn’t want business inquiries routed to my music website, and I didn’t want my avocation tangled with my business either.

European governments jumped in to legislate the preservation of anonymity with laws referred to as the “right to be forgotten,” which would force internet publishers to take down content at a user’s request. In a world where content was already fragmented and detached from its initial author, how could any web publisher comply with individual requests for removal? The web’s protocols had no way to disambiguate names across the broader internet, so reputation policing in a decentralized publishing ecosystem proved tremendously complicated for web content hosts.

Mozilla championed investigations, such as the Coral Project, to address internet trolling where it targeted public commenting platforms on news sites. But as a relatively small player in the broader market, it would have been challenging to fix a behavioral problem with open source code alone. Meanwhile, a broader threat to Mozilla’s guiding principles was looming: the emergence of behaviorally targeted advertising spanning across websites, a significant threat to internet users’ right to privacy.

The founders of Mozilla had penned a manifesto of principles to serve as the guiding framework for how they would govern the projects the non-profit sponsored in its early days. (The full manifesto can be read here: https://www.mozilla.org/en-US/about/manifesto/) In general, the developers of web software keep the interests of their end users at the forefront of their minds as their guiding light. They woo customers to their services and compete with other developers and products by introducing new utilities that add to the convenience and delight of their users. But sometimes the companies that make the core services we rely on have to outsource some of the work of bringing those services to us. With advertising, that outsourcing became a slippery slope. The advertising ecosystem’s evolution in the Web 2.0 era, and the trade-offs publishers were making with end-user privacy, became too extreme for Mozilla’s comfort. Many outside Mozilla also believed the compromises being made were unacceptable given the assurances they wanted to offer their own end users, and they were willing to band together with Mozilla to do something about it.

While this is a sensitive subject that raises ire in many people, I can sympathize with the motivations of the various parties that contributed to the problem. As a web publisher myself, I had to think hard about how to bring my content to my audience. Web hosting costs increase with the size of the audience you wish to entertain: the more people who read and streamed my articles, pictures, music and video, the more I had to pay each month to keep them happy and keep the web servers running. All free web hosting services came with compromises. So eventually I decided to pay my own server fees and incorporate advertising to offset them.

Deciding to post advertising on your website is a concession of control. If you use an ad network with dynamic targeting, the advertising platform decides what goods or services show up on your pages. When I wrote about drum traditions from around the world, advertisers’ systems might conclude my website was about oil drums and show ads for steel barrels. As a web publisher, I winced; oil barrels aren’t relevant to people reading about African drums. But it paid the bills, so I tolerated it, and I trusted my visitors to forgive the sight of oil barrels next to my drums.

I was working at Yahoo when the professed boon of behavioral advertising swept through the industry. Instead of serving semantically derived, keyword-matched ads on my drum page, I could suddenly let the last webpage you visited buy “re-targeting” ads on my webpage, replacing those oil-barrel ads with offers from sites that had been relevant to your personal journey yesterday, regardless of what my website was about. This had the unsightly side effect that products you had already purchased on an ecommerce site would follow you around for months. But it paid the bills, and it paid better than the mis-targeted ads. So more webmasters started doing it.

Behaviorally targeted ads seemed at first like a slight improvement in a generally under-appreciated industry. But because they worked so well, significant investment demand spurred ever more refined targeting platforms in the advertising technology industry, and internet users grew increasingly uncomfortable with what they perceived as pervasive intrusions on their privacy. Early on, I remember thinking, “They’re not targeting me, they’re targeting people like me.” Because the targeting was approximate, not personal, I wasn’t overly concerned.

One day at Yahoo, I received a call that had been escalated through the customer support channels as a potential product issue. As I was the responsible director in the product channel, they asked if I would talk to the customer. Business directors don’t usually do customer support directly, but nobody else was available to field the call, so I did. The customer was receiving inappropriate advertising in their browser. It had nothing to do with a Yahoo-hosted page, which has filters for such advertising; it was caused by a tracking cookie that the user, or someone who had used the user’s computer, had acquired in a previous browsing session. I walked the user through clearing the cookie store in their browser, which was not a Yahoo browser either, and the problem was resolved. The experience brought home to me how seriously people fear perceived invasions of privacy by internet platforms. The source of the problem hadn’t been my company, but this person had nobody else to turn to to explain how web pages work. And considering how rapidly the internet emerged, it dawned on me that many people who lived through its arrival never had a mentor or teacher to explain how these technologies worked.

Mozilla's Manifesto of 10 principles
Journalists started to uncover some very unsettling stories about how ad targeting can become directly personal. Coupon offers on printed store receipts were revealing customers’ purchase behaviors, which could expose details of their personal lives and even their health. Mozilla’s principle #4 states that “Individuals’ security and privacy on the internet are fundamental and must not be treated as optional.” So Mozilla decided to tackle the ills of personal data tracking with a declaration sent as part of the page-load “header” request, the hand-shake a web browser performs with a website on first page load. Mozilla asserted that if the user’s tracking preferences were declared up front, the site host would know how the user wanted advertising or tracking customized. And because all browsers use headers, this was a solution that could be implemented across every browser in a common, transparent fashion.

Most savvy web users know what browser cookies are, where to find them and how to clear them if needed. But one of our security engineers pointed out to me that we don’t want our customers forever chasing down errant, irritating cookies and compulsively flushing their browser history. This was friction, noise and inconvenience the web was creating for its primary beneficiaries. The web browser, as the user’s delegated agent, should handle these irritations without wasting its customers’ time or sending them hunting for pesky plumbing issues in the preference settings of the software. The major browser makers banded together with Mozilla to try to eradicate this ill.

At first it started with a very simple tactic. The browser cookie had been invented as a convenience: a small piece of state that let a site remember your session, so that returning to a page didn’t mean starting over. Every web page you visit sets a cookie if it needs to offer you some form of customization. Over time, advertisers came to treat a visit to their webpage as a kind of consent to be cookied, even when the “visit” happened inside an inline frame (iframe) embedded in another page. You visited Amazon previously, they assumed, so surely you’d want to come back. There should have been an explicit statement of trust, something describable as an “opt-in,” yet a visit to a web destination was in no way a contract between a user and a host. Session history seemed like a reasonable implied vector for defining trust, except that not all elements of a web page are served from a single source. Single origin was a very Web 1.0 concept; in the modern web environment, dynamically aggregated pages pull content, code and cookies from dozens of sources in a single page load.

The environment of trust was deemed to be the first-party relationship between the site a user deliberately visits and the browser’s “cookie cache.” This cache serves as a passive, temporary client-side history notation, giving the browser a short-term bread-crumb trail of sequential visits (in case you want to return to a previously visited URL you hadn’t bookmarked, for instance). But cookies and other history-tracking elements could also be served in the iframe windows of a page, the portions that web designers “outsource” to external content calls. When cookies arrived via a passive iframe not controlled by the site host, Firefox stored them with a notation attribute indicating they came from outside the first-party context of the site the user had explicitly navigated to.

Within the browser industry, the Firefox and Safari teams wanted to quarantine and forcibly limit what cookies in this “third-party context” could do. So we created policies limiting how long such cookies could stay active after being set. We also introduced a browser feature that let users block certain sites from setting third-party cookies at all. While controversial at first, this led Mozilla to engage with developers and advertising technology companies on alternative ways to customize advertising that didn’t depend so heavily on dropping cookies that could annoy the user.
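
The first-party/third-party distinction eventually became something sites could declare themselves, through the SameSite cookie attribute, a later standardized mechanism rather than anything from our original policy work. A one-line sketch:

```js
// "Lax" keeps the cookie out of most cross-site (iframe) requests;
// "None" opts back in to third-party use and requires Secure (HTTPS).
document.cookie = 'session=abc123; SameSite=Lax; Secure; Max-Age=86400';
```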

Browser makers tended to standardize the handling of web content across their separate platforms through the W3C or other working groups. And to create a standard, there had to be a reference architecture that multiple companies could implement and test. The first attempt was called “Do Not Track,” or DNT for short. The DNT preference in Firefox, or any other browser, would be sent to each website on first load. It seemed innocuous enough: the page host could still remember the session for as long as needed to complete it, and most viewed the DNT setting as a simple statement of the trust environment between a web publisher and a visitor for everyday browsing.
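
On the site side, honoring DNT could be as simple as checking the preference the browser exposed to scripts. A minimal sketch using the navigator.doNotTrack property of that era:

```js
// The preference rode along on every request as a "DNT: 1" header and
// was mirrored to page scripts as navigator.doNotTrack.
if (navigator.doNotTrack === '1') {
  // The user asked not to be tracked: skip loading analytics or ad trackers.
} else {
  // Tracking scripts could be conditionally loaded here.
}
```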

Harvey Anderson testifies on DNT support across browsers in US Senate
All the major browser vendors addressed the concern of government supervision with the idea that the industry should self-regulate: browsers, publishers and advertisers would come to a general consensus on how best to serve people using their products and services, without legislators mandating how code should be written or operate. Oddly, it didn’t work so well. Eventually, certain advertisers simply decided not to honor the DNT header. The US Congress invited Mozilla’s general counsel, Harvey Anderson, to discuss what was happening and why some browsers and advertising companies were ignoring the user preferences stated by our shared code in browser headers.

Mozilla Lightbeam browser cookie affiliation tracker
To expose the tracking utilities sites were using, Mozilla sponsored a “Lightbeam” extension that let users see all the trackers and the relationships between the entities utilizing them. You could see which sites dropped a targeting cookie in a single web session, and which other companies thereafter accessed that cookie when serving their own pages to the same browser. (Note this doesn’t track the user, but rather the session-consistency across two different page loads.) Our efforts to work in the open via DNT with the other industry parties ultimately did not protect users from belligerent tracking. It devolved into a whack-a-mole problem we referred to as “fingerprinting,” in which advertising companies re-targeted off of computer or phone hardware characteristics, or even off the very preference not to be tracked! It was a bit preposterous to watch this happen across the industry and to hear the explanations from those doing it. What was very inspiring to watch, on the other side, was the effort of Mozilla’s product, policy and legal teams to push this concern to the fore without asking for legislative intervention. Ultimately, European and US regulators did decide to step in to create legal frameworks that punitively address breaches of user privacy enabled by the technology of an intermediary. Since the launch of the European GDPR regulatory framework, the scandals around lax handling of private user data in internet services have become widely publicized and sit at the forefront of technology discussions and education.

Now the broader consumer industry is keenly aware of the privacy risks that can surface on the web. Open source web browsers are proving to be one of the best protections against the risks users encounter, through shared best practices and open, transparent policy and governance across the ecosystem. Dozens of browsers now build on the core rendering engines WebKit, Chromium and Gecko. So should a new vulnerability be discovered in any of them, the contributors to each engine can patch it and pass the improved code downstream to every project that inherits it.

How They Do It

You may be asking yourself how companies like Google, Apple and Mozilla develop their browsers in such a way that anyone can help fix them. The essential ingredient is openness. Just as with the US Patent and Trademark Office, there is a built-in mechanism by which the innovations of today get passed on to the developers who will iterate on them tomorrow. That which doesn’t change, doesn’t grow. If thousands of people contribute code to fix our products, the products get better on the strength of that broad base of support.

Working in the open was part of AOL’s original strategy when it open sourced Netscape. If other companies would build together with them, the collaborative work of contributors outside the AOL payroll could directly benefit the browser team inside AOL. Bugzilla was structured as a hierarchy of nodes, where a node owner could prioritize external contributions to the code base and commit them for inclusion in the derivative build, which would be released as a new update package every few months.

Module Owners, as they were called, would evaluate candidate fixes and new features against their own triage lists of feature requests and complaints from their teams. The main team that shipped each version was called Release Engineering. They cared less about individual features than about the overall function of the broader software package. Each day they would bundle the then-current software into a “Nightly” build, as new bugs were up-leveled and committed to the software tree. Release Engineering watched for conflicts between patches and annotated them in Bugzilla, so that the various module owners could see where their commits were causing problems in other portions of the code base.

Christopher presenting on launch of Social API in public Air Mozilla
While our browser is open source and anyone can submit a candidate patch, we also need to show people how we work and why. So Mozilla holds weekly meetings that are open to the public, where we highlight everything we are working on. We call our communications platform Air Mozilla, and we host dozens of coding hackathons each week so developers can learn how to code. It doesn’t matter what you do in open source if you don’t publicize it! Asa Dotzler, our venerable host of these weekly calls, once made a funny quip to me. Someone wanted to open source something to “give” to Mozilla. That, he said, was indeed an admirable offer. But open sourcing isn’t just about giving; it’s about maintaining. After a piece of software is made open, you have to help the people who want to work with and iterate on it. Code that is open sourced but not supported is likely to die. That’s why WebKit, Chromium and Gecko thrive: Apple, Google and Mozilla take special care in reviewing and iterating on the code that comes in as contributions from the outside.

Over the years Mozilla has evolved the ways that people of all backgrounds and locations can contribute. The simplest way is to use the software and post on support.mozilla.org, the user forum for Q&A about the products, whenever you have a problem. For those willing to roll up their sleeves and dive into specific product issues, there is bugzilla.mozilla.org, where we comment on issues in the software and allocate resources to resolve them. The various internet policy and advocacy issues are detailed on mozilla.org, along with multiple ways people can get involved in the mission. Even passive use of the products contributes to their refinement, as Mozilla’s software has automatic program-health utilities that anonymously report system issues encountered by many users. So just surfing the web and reporting crashes through the automated tools helps Safari, Chrome and Firefox get better.

How Partners Help

After Mozilla acquired Pocket, they asked me to relocate to Germany to prospect for new partnerships across Europe. Germany has the largest geographical distribution of Mozilla software outside North America. Having studied German extensively in high school and college, I was excited to take on this challenge. My wife and I relocated to Berlin, where we decided to live in the heart of the former East Berlin. We found an apartment close to the Fernsehturm (literally “far-seeing tower,” mundanely called the TV tower in English), which rises triumphantly above the small isle where Berlin was founded over 800 years ago.

In Germany, more people download Firefox than use the browser that comes as the default on their personal computers. Why should this be? I was very keen to find out. Over months I interviewed people about their preferences, world views and fascinations. Put simplistically, many told me that Germans really like choice and are skeptical of anything handed to them with an implication of convenience. They may really like their Windows PC, but they want to get their software from somewhere other than the company that powers it. This attitude isn’t an anti-authoritarian phobia about Microsoft consolidating their user data; it is rather that, where a choice is given, Germans relish exercising it.

Beyond the stated preference for a self-determined future, I repeatedly heard that our German customers liked that Mozilla was a non-profit and that the software was open source, and therefore entirely subject to peer review for any additions or subtractions Mozilla might decide to make at a later date.

There are other factors, beyond the casual commentary I heard, that likely influenced these perspectives. The percentage of the German population subscribing to newspapers and journalistic magazines is higher than in most of Europe and far higher than in North America. I suspect that media coverage of internet-based malware vulnerabilities is heightened in Germany to the extent that many web users there take special care with their digital hygiene. In particular, the German ministry for security in information technology (abbreviated BSI, for Bundesamt für Sicherheit in der Informationstechnik; more at https://www.bsi.bund.de) has been very articulate in publicizing its insights about malware and phishing risks, which the German press then amplifies. This is not to say that any web browser these days is particularly better or worse than another. (Edge and Chrome are based on Blink/Chromium; Telekom Browser and Firefox are based on Gecko; Safari and dozens of mobile apps are based on WebKit. All are modern browsers with active bug fixing, zero-day response teams and open source code.) But in a marketplace with heightened attention on every aspect of the industry, from shareholders to business models, we can expect a tremendous diversity of user choice to be expressed. The result of user choice in such an arena isn’t swayed by a majority view, nor by the company with the best marketing, nor by the best operating-system bundling. Absolutely everything is put to a vote every day.

As an American, I would say I’m open to promotions, inclined to take offers at face value, and perhaps more fickle and transitory in my decisions as a consumer on the internet. But from what I have heard in my discussions with customers, partners and end users in Germany, I’d say they are the opposite. Generalizing a bit too far perhaps, I believe the German audience is discerning, reluctant to shift behaviors, wary of any offer or promotion, and more likely to go by word of mouth or journalistic recommendation than the typical American user.

Mozilla had tremendous momentum in Germany before my arrival, of course. Looking back across the decades, word of mouth around Mozilla’s emergence from Netscape may have played a partial role, along with endorsements from the BSI for its open source and privacy reputation. But thereafter, leading internet service providers Deutsche Telekom and 1&1, and the leading portal web.de, started recommending Gecko/Firefox as an alternative to IE and Safari. This underscores the value of partnerships in Germany. I wouldn’t say these companies needed any particular reward to offer choices to their own customers. Had they said their services were only available in IE and Safari, many of their customers might have been skeptical and switched away. But the fact that they offered to support their users in both IE and Firefox led a lot of people to go with the “underdog” non-profit instead of the browser that had been served to them as a bundled part of their operating system.
Deutsche Telekom’s version of the Gecko browser didn’t even leverage the Netscape or Firefox brands. For them, a skinned version of Gecko, combined with features they added on top of the open source browser, was enough to make users want to download the customized alternative.

Partnerships are great for brand halo when they confer trust where no established relationship exists. But even when there is no inherent brand value, trust can still be earned. When I moved to Berlin, a couple of websites were experimenting with Mozilla’s “Save to Pocket” button. By the time I left, sixteen major publishers had integrated our tools to recommend content to Firefox users. My approach to promoting the tools was never incentivized by a commercial relationship; it rested on open and transparent communication of the product’s benefits. Pocket didn’t earn its reputation in Germany through my efficacy as a salesman. I knew from my lessons in the culture that a heavy sales approach wouldn’t have worked anyway. But through transparency and proven reliability, over time, my team and I could win trust. Trust and transparency are the basis of everything on the internet. As I found in Japan and Germany, diligent effort on behalf of users, via their service providers, can win the day for a middleware company, be it a search engine or a browser.

As a business developer, it has been very easy for me to talk about Mozilla’s software. The parties across the table could always take the software I was pitching and run with it under their own brand if they wanted to. It was nice to have a pitch that ended with the punch line: “succeed with us and help us enhance the product jointly, or take our code and succeed without us.” There has never been a need for Mozilla to be defensive about its innovations, because it was set up for generosity.

As many companies have benefited by forking Mozilla’s code as have sought partnerships directly. Beyond the business interests tied to brands, budgets and marketing strategies, Mozilla does a far greater good beyond its own scope of interest. Open source developers and their sponsors achieve a broader good than any of their individual interests. By giving away open source code and maintaining an active public dialogue about its future, we inspire developers and web creators across the world to make the next generation of the web stronger and more adaptive to the needs of the market, beyond vested interests and shareholder benefit. We are collaboratively building a platform that is mutable and extensible, to be carried on through the visions and inspirations of the community we inspire.

Passing the baton
It has been amazing to watch Netscape morph from my favorite browser, when I started my journey with the internet 25 years ago, into a platform that inspires hundreds of millions of people today. And just as the internet is a system of discrete nodes that stay resilient and independent of one another, so the open source ecosystem is now a plurality of collaborative (though competitive) entities that jointly defend the network against vulnerabilities without ever relying on a single entity. As it was envisioned for Netscape’s future two decades ago, so it is now across a broad pool of international developers focused on the future utility and flexibility of the web.

Some might wonder, “Wouldn’t it be great if everyone just used the same software?” That might seem to solve many of the site-compatibility problems web developers run into. But think back to the competitive ecosystem that governments are so keen on. Competition is cumbersome and inefficient to a great extent. Yet the more people who pick up the code and try something new and innovative with it, the better the ultimate result for the broader base of consumers.

Mozilla was spawned out of the desire to create exactly that open and competitive ecosystem, one that would sustain the open internet beyond the influence of any single player or contributor (beyond even Mozilla itself). Now that there are several competing open source initiatives, we have less to worry about from any single one of them. As the internet itself was designed as a series of independent nodes that could function with or without the others in the network, so it is with the competitive ecosystem of software developers fostering this amazing internet. Mozilla has come to seem something of a diaspora: its former contributors are now spread across almost every technology firm in the world.

Sometimes I go to the redwood forests around the San Francisco Bay Area. There, you’ll often see trees growing in a circle called a “fairy ring.” Redwoods can reproduce through burls that protrude from their trunks; when these fall to the ground, they become new trees. So the ring of redwoods one sees is just the echo of a vanished center. Mozilla never aspired to be the largest player (from what I heard, at least). It set out to be an open source example of its ideals, which were its burls: a lab, a Petri dish for experimentation, and a reference point that expresses its developers’ ideals in code.

Looking back over the decades of my career, it has been awe-inspiring to see what lasts and what doesn’t. Some of the deals I’ve worked on have spanned a decade, but that’s somewhat rare in this industry. The best we can aspire to is to contribute some small part that makes the broader tools and market better, furthering a competitive and open ecosystem that lets new entrants advance the work of the previous generation.

The Firefox Monument on the Embarcadero
When Mozilla opened its first San Francisco office, they placed a monument to Firefox on the main road near the western span of the Bay Bridge: a large three-dimensional globe representing the Earth, sheltered by an orange fox. Mozilla inscribed the names of all its employees and contributors along the sides of the monument, under the bold inscription “Doing good is part of our code.” Over my years in the internet industry I’ve received various awards with my name printed on them, to sit on a shelf at home and inspire memories and pride. But I’d never had my name placed on a public monument like this before. Walking past it always makes me feel proud of my association with this amazing group of people.

2020 brings a global viral pandemic that has taken millions of lives. As governments shut down their local economies to stem the spread, every community is being impacted in sundry ways; no part of our world is untouched. Mozilla, along with thousands of other businesses, is feeling the impact and has had to reduce its staff to ensure it can persist into the future and stay true to the values we all support.

It is always compelling and rewarding to see your efforts reflected in the success of others. It’s my time now to pass on the baton with pride, knowing the race will go on. The wonderful thing about open source is that it can live on beyond what any one company or team of individuals contributes to it. I’ve seen time and again how companies unrelated to Mozilla pick up the code and run with it, saving their developers the time and effort of building from scratch. AOL cast a thousand seeds when it gave Netscape to the community. It has spawned an industry of amazing competitive collaboration, without ever needing to be dreamed up by government legislation.

There is a parable in Japan about a knife that gets sharper with use rather than duller. Open source is the closest thing to that allegory I’ve seen. It has been fascinating to be a part of it! Open source is a tool honed by the earnest endeavor of thousands of people sharing their creativity toward a common goal. And because of its transparency, it cannot be maliciously encumbered without the community being able to see and react.

Thank you to the teams of Mozillians who’ve inspired me from the beginning!