Sunday, May 31, 2020

Money, friction and momentum on the web

Back when I first moved to Silicon Valley in 1998, I tried to understand how capital markets made the valley such a unique place for inventors and entrepreneurs.  Corporate stocks, real estate, international currency and commodities markets were concepts I was already familiar with from my time working at a financial news service in the nation's capital in the mid 1990's.  However, crowdfunding and angel investing were new concepts to me 20 years ago.  Crowdfunding platforms seemed to tilt more to the advantage of the funding recipient than the balanced two-sided exchanges of the commercial financial system.  I often wondered what made generosity-driven models different from reward-driven sponsorships.

When trying to grasp the way angel investors think about entrepreneurship, my friend Willy, a serial entrepreneur and investor, said: “If you want to see something succeed, throw money at it!”  The idea behind the "angel" is that they provide the riskiest of risk-capital.  Angel investors join the startup's funding before banks and venture capital firms will.  They seldom get payback in kind from the companies they sponsor and invest in.  Angels are the funders of first resort for founders because they tend to be more generous, more flexible and more forgiving than commercial lenders.  They value the potential success of the venture far more than they value the money they put forth.  And the contributions of an angel investor can have an outsized benefit in the early stage of an initiative by sustaining the founder/creator at their most vulnerable stage.  But what do they get out of it that is worth more than money to them?

Over the course of the last couple of decades I've become a part of the crowdfunding throng of inventors and sponsors.  I have contributed to small business projects on Kiva in over 30 countries, and backed many small-scale projects across Kickstarter, Indiegogo and Appbackr.  I've also been on the receiving side, having the chance to pitch my company for funding on Sand Hill Road, the strip of venture capital firms on the hillsides above Palo Alto and Menlo Park.  As a funder, it has been very enlightening to know that I can be part of someone else's project by chipping in time, insights and capital to get improbable projects off the ground.  And the excitement of following the path of the entrepreneurs has been the greatest reward.  As a founder, I remember framing the potential of a future that, if funded, would yield significant returns to the lenders and shareholders.  Of course, the majority of new ventures do not come to market in the form of their initial invention.  Some of the projects I participated in have launched commercially and I've been able to benefit, whether by getting shares in a growing venture or by getting nifty gadgets and software as part of the pre-release test audience.  But those things aren't the reward I was seeking when I signed up.  It was the energy of participating in the innovation process and the excitement about making a difference.  After many years of working in the corporate world, I became hooked on the idea of working with the engineers and developers who are bringing about the next generation of the web's expressive and experiential platforms.

During the Augmented World Expo in May, I attended a conference session called "Web Monetization and Social Signaling," hosted by Anselm Hook, a researcher at the web development non-profit Mozilla, where I also work.  He made an interesting assertion during his presentation: "Money appears to be a form of communication."  His study observed platform-integrated social signals (such as up-voting, re-tweeting and applauding with hand-clapping emojis) used to draw attention to content users had discovered on the web, in this case within the content recommendation platform of the Firefox Reality VR web browser.  There are multiple motivations and benefits for this kind of social signaling.  It serves as a bookmarking method for the user, it increases the content's visibility to friends who might also like it, it signals affinity with the content as part of one's own identity, and it gives reinforcement to the content/comment provider.  Anselm found in his research that participants reacted more strongly when they believed their action contributed financial benefit directly to the other participant.  Meaning, we don't just want to use emojis to make each other feel good about our web artistry.  In some cases, we want to cause profit for the artist/developer directly.  Perhaps a gesture of a smiley-face or a thumb is adequate to assuage our desire to give big-ups to an artist, and we can feel like our karmic balance book is settled.  But what if we want to do more than foist colored pixels on each other?  Could the web do more to allow us to financially sustain the artist-wizards behind the curtain?  Can we "tip" the way we do our favorite street musicians?  Not conveniently, because the systems we have now mostly rely on the credit card.  But in the offline context, do we interrupt a street busker to ask for their Venmo or Paypal account?  We typically use cash, which as yet has only rough analogues in our digital lives.

When I lived in Washington DC, I had the privilege to see the great Qawwali master Nusrat Fateh Ali Khan in concert.  Qawwali is a style of inspired Sufi mystical chant combined with call-and-response singing with a backup ensemble.  Listening for hours as his incantations built from quiet mutterings accompanied by harmonium and slow-paced drums to a crescendo of shouts and wails of devotion at the culmination of his songs was very transporting, in spite of my dissimilar cultural upbringing and language.  What surprised me, beyond the amazing performance of course, was that as the concert progressed people in the audience would get up, dance and then hurl money at the stage.  "This is supposed to be a devotional setting, isn't it?  Hurling cash at the musicians seems so profane," I thought.  But apparently this is something that one does at concerts in Pakistan.  The relinquishing of cash is devotional, like Thai Buddhists offering gold leaf by pressing it into the statues of their teachers and monks.  Money is a form of communication of the ineffable appreciation we feel toward those of greatness in the moment of connection or the moment of realization of our indebtedness.  Buying is a different kind of act, one that is personal but not expressive.  When we buy, it is disconnected from the artistry of the moment.  It implies no lesser appreciation, for sure.  It's different because it isn't social signaling; it's coveting.  When in concerts or in real-time scenarios we transmit our bounty upon another, it is an act of making sacrifice and conferring benefit.  The underlying meaning of it may be akin to "I hope you succeed!" or, "I relinquish my having so that you might have."  I'm glossing over the cultural complexity of the gesture, surely.  Japanese verbs have subtle ways to distinguish the transfer and receipt of benefit according to seniority, societal position and degree of humility: giving upward ("ageru", or humbly "sashiageru"), a superior giving downward to oneself ("kudasaru"), and giving or receiving among peers ("kureru"/"morau").  The psychological subtlety of the transfer of boons between individuals is scripted deeply within us, all the more accentuating how a plastic card or a piece of paper barely captures the breadth of expression we caring animals have.

The web of yesteryear has done a really good job of covering the coveting use case.  Well done, web!  Now, what do we build for an encore?  How can we emulate the other expressions of human intent that coveting and credit cards don't cover?

In the panic surrounding the current Covid pandemic, I felt a sense of being disconnected from the community I am usually rooted in.  I sought information about those affected internationally in the countries I've visited and lived in, where my friends and favorite artists live.  I sought out charitable organizations positioned there and gave them money, as it was the least I felt I could do to reach those impacted by the crisis remote from me.  Locally, my network banded together to find ways that we could mobilize to help those affected in our community.  We found that the metaphor of the "gift card" (a paper coupon) could be used to funnel cash quickly into the coffers of local businesses so they could meet short-term spending needs to keep their employees paid and their businesses operational even while their shops were forced into closure in the interest of public health.  I found the process very slow and cumbersome, as I had to write checks, give out credit card numbers (to places I would never typically share sensitive financial data with), find email addresses so I could send PayPal transfers, and in some cases resort to paper cash for those whom the web could not reach.

This experience made me keenly aware that the systems we have on the web don't replicate the ways we think and the ways that we express our generosity in the modern world.  As web developers, we need to enable a new kind of gesture akin to the act of tipping with cash in offline society.  Discussing this with my friend Aneil, he asserted that donor platforms like Patreon and blockchain currencies can both fit the bill for addressing the donor need, if the recipient is set up to receive them. He cautioned that online transactions are held to a different standard than cash in US society because of “Know Your Customer” regulation, which was put in place to stem the risk of money laundering through anonymous transactions. As we discussed the idea of peer-to-peer transactions in virtual environments, he pointed out, ”The way game companies get around that is to have consumers purchase in-game credits that cannot be converted back into money.” The government is fine with people putting money into something. It’s the extraction from the flow of exchange, in a monetary sense, that needs to be subject to the regulations designed for taxation and investment controls.

Patreon, like PayPal, is a cash-value-paired system, while virtual currencies such as Bitcoin, BAT and Ethereum can vary in the exchange value of their coin. Blockchain ledger transactions trace exactly which wallet gave what to which wallet. So they are, in theory, able to comply with KYC restrictions even in situations where the exchange is relatively anonymous. Yet these currencies are wildly different in terms of how their holders perceive their value. Aneil pointed out that Bitcoin is bad for online transactions because its scarcity model incentivizes people to hold onto it. It’s like gold, a slow currency. A highly valuable crypto-currency would therefore slow down rather than facilitate donation and tipping. You need a currency that people are comfortable holding for only short periods of time, like the funds in a Kiva or Patreon wallet. If people are always withdrawing from the currency for fear of its losing value, then the currency itself isn’t stable enough to be the basis of a robust transaction system. For instance, when I was in Zimbabwe, where inflation in the paper currency is incredibly high, people wanted to get rid of it quickly for some other asset that lost value more slowly than the paper notes. Similarly, Aneil pointed out, any coin that you use to transact virtually could suffer the incentive to cash out quickly, which would drive the value of the asset in a fluid marketplace lower. Cash proxies don’t have an inherent value unless they are underpinned by an artificial or perceived scarcity mechanism.  The US government has an agency, the Federal Reserve, whose mission is to ensure that money depreciates slowly enough that the underlying credit of the government stays stable and encourages growth of its economy.  Any other currency system would need the same.  Bitcoin can't be it, because its exceedingly high scarcity leads to hoarding.  Until web developers solve this friction problem, web transactions, and therefore web authorship, will be starved of the support they need to grow.

Understanding this underlying problem of financial sustainability, my colleague Anselm is working with crypto-currency enabler Coil to try to apply crypto-currency sponsorship to peer and creator/recipient exchanges on the web.  He envisions a future where users could casually exchange funds in a virtual, web-based or real-world "augmented reality" transaction without needing to exchange credit card or personal account data.  This may sound mundane, because the use-case and need is obvious; we're used to doing it with cash every day.  The question it raises is, why can't the web do this?  Why do I need to exchange credit cards (sensitive data) or email addresses (not sensitive, but not public) if I just want to send credits or tips to somebody?  There was an early success in this kind of micropayments model when Anshe Chung famously became the first person to earn a million dollars by selling virtual goods to Second Life enthusiasts.  Linden Lab's virtual platform had the ability for users to pay money to other peer users inside the virtual environment.  With a bit more development collaboration, this kind of model may be beneficial to others outside of specific game environments.
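
To make the mechanics concrete, here is a minimal sketch of the Web Monetization proposal that Coil has championed.  The payment pointer "$wallet.example/alice" and the console output are hypothetical, and the events assume a monetization-capable browser or extension:

    // The page declares a "payment pointer" in its <head>:
    //   <meta name="monetization" content="$wallet.example/alice">
    // A supporting browser or extension then opens a micropayment stream
    // to that wallet and reports progress back to the page:
    if (document.monetization) {
      document.monetization.addEventListener('monetizationstart', () => {
        // A payment stream has opened; the page could now hide its
        // tip-jar prompt or unlock bonus content for the supporter.
        console.log('Micropayment stream started');
      });
      document.monetization.addEventListener('monetizationprogress', (event) => {
        // Each event carries a small increment of value received.
        const { amount, assetCode, assetScale } = event.detail;
        console.log(`Received ${Number(amount) / 10 ** assetScale} ${assetCode}`);
      });
    }

The key property is that the site never learns who tipped it, and the tipper never shares a credit card number, much like dropping cash in a busker's hat.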

Anselm's talk at AWE introduced the concept of a "tip-jar," something we're familiar with from offline life, for the nascent ecosystem of virtual and augmented reality web developers.  For most people, who are used to high-end software being sold as apps in a marketplace like iTunes or the Google Play Store, the idea that we would pay web pages may seem peculiar.  But it's not too far a leap from how we generally spend our money in society.  Leaving tips in cash is common practice for Americans.  Even when service fees are not required, Americans tend to tip generously.  Lonely Planet dedicates sections of its guidebooks to local customs around money, and I've typically seen that Americans have a looser idea of tip amounts than people in other countries.

Anselm and the team managing the "Grant for the Web" hope to bring this kind of peer-to-peer mechanism to the broader web around us by utilizing Coil's grant of $100 Million in crypto-currency toward achieving this vision. 

If you're interested in learning more about the web-monetization initiative from Coil and Mozilla, please visit: https://www.grantfortheweb.org/








Sunday, January 12, 2020

The Momentum of Openness - My Journey From Netscape User to Mozillian Contributor

Working at Mozilla has been a very educational experience over the past eight years.  I have had the chance to work side-by-side with many engineers at a large non-profit whose business and ethics are guided by a broad vision to protect the health of the web ecosystem.  How did I go from sitting in front of a computer screen in 1995 to working behind the scenes of the web now?  Below is my story of how my path wended from being a Netscape user to working at Mozilla, the heir to the Netscape legacy.  It's amazing to think that a product I used 24 years ago ended up altering the course of my life so dramatically thereafter.  But the world and the web were much different back then.  And mine was one of thousands of similar stories, of people coming together for a cause they believed in.

The Winding Way West

Like many people my age, I followed the emergence of the World Wide Web in the 1990’s with great fascination.  My father was an engineer at International Business Machines when the Personal Computer movement was just getting started.  His advice to me during college was to focus on the things you don't know or understand rather than the wagon-wheel ruts of the well-trodden path.  He suggested I study many things, not just the things I felt most comfortable pursuing.  He said, "You go to college so that you have interesting things to think about when you're waiting at the bus stop."  He never made an effort to steer me in the direction of engineering.  In 1989 he bought me a Macintosh personal computer and said, "Pay attention to this hypertext trend.  Networked documents are becoming an important new innovation."   This was long before the World Wide Web became popular in the societal zeitgeist.  His advice was prophetic for me.

After graduation, I moved to Washington DC and worked for a financial news wire that covered international business, the US economy, the World Trade Organization, the G7, the US Trade Representative, the Federal Reserve and breaking news from the US capital.  This era stoked my interest in business, international trade and economics.  During my research (at the time, via a Netscape browser, using the AltaVista search engine) I found that I could locate much of what I needed on the web rather than in the paid LexisNexis database, which I also had access to at the National Press Club in Washington, DC.

When the Department of Justice initiated its anti-trust investigation into Microsoft, for what were alleged to be anti-competitive practices against Netscape, my interest was piqued.  Philosophically, I didn’t particularly see what was wrong with Microsoft standing up a competing browser to Netscape.  Isn’t it good for the economy for there to be many competing programs for people to use on their PCs?  After all, from my perspective, it seemed that Netscape had held a near-monopoly of the browser space at the time.

Following this case was my first exposure to the ethical philosophy of the web developer community.  During the testimony, I learned how Marc Andreessen and his team of software developer pioneers had an idea that access to the internet (like the underlying TCP/IP protocol) should not be centralized, or controlled by one company, government or interest group.  And the mission behind the Mosaic and Netscape browsers had been to ensure that the web could be device and operating system agnostic as well.  This meant that you didn’t need to have a Windows PC or Macintosh to access it.

It was fascinating to me that there were people acting like Jiminy Cricket, Pinocchio's conscience, overseeing the future openness of this nascent developer environment.  Little did I know then that I myself was being drawn into this cause.  The more I researched it, the more I was drawn in.  What I took away from the DOJ/Microsoft consent decree was the concept that our government wants to see our economy remain somewhat inefficient in the interest of preserving a diversity of competitive economic opportunity, which it asserted would spur a plurality of innovations that could compete in the open marketplace to drive consumer choice and thereby facilitate lower consumer prices.  In the view of the US government, monopolies limit this choice, keep consumer prices higher, and stifle entrepreneurial innovation.  US fiscal and trade policy was geared toward creating greater open market access to world markets, while driving prices for consumers lower in an effort to increase the quality of life across all the economies the US traded with.

The next wave of influence in my journey came from the testimony of the chairman of the Federal Reserve before the US Congress.  The Federal Reserve is the US central bank.  Its leaders would regularly meet at the G7 conference in Washington DC with the central bank heads of the major economically influential countries to discuss their centrally-managed interest rates and fiscal policies.  At the time, the Fed Chairman was Alan Greenspan.  Two major issues topped the agenda during his congressional testimonies in the late 1990’s.  First were the trade imbalances between the US (a major international importer) and the countries of Asia and South America (which were major exporters), which were seeking to balance out their trade deficits via the WTO and regional trade pacts.  In Mr. Greenspan’s testimonies, Congressional representatives would repeatedly ask whether the internet would change this trade imbalance as more of the services sector moved online.
As someone who used a dial-up modem to connect to the internet at home (DSL and cable/dish internet were not yet common at the time), I had a hard time seeing how services could offset a multi-billion dollar asymmetry between the US and its trading partners.  But at one of Mr. Greenspan’s sessions, Barney Frank (one of the legislators behind the "Dodd-Frank" financial reform bill which passed post-financial crisis) asked Mr. Greenspan to talk about the impact of electronic commerce on the US economy.  Mr. Greenspan, always wont to avoid stoking market speculation, dodged the question, saying that the Fed couldn’t forecast what the removal of warehousing costs could do to market efficiency, and therefore to markets at large.  This exchange stuck with me.  At the time they were discussing Amazon, a bookseller which could avoid the typical overhead of a traditional retailer by eliminating brick-and-mortar storefronts and the burden of stocking inventory consumers hadn't yet decided they wanted.  Amazon was able to source the books at the moment the consumer decided to purchase, which eliminated much of the inefficiency of retail.

It was at this time that my company decided to transition its service to a web-based news portal as well.  In this phase, Mr. Greenspan cautioned against the "irrational exuberance" with which the stock market valuations of internet companies were soaring to dizzying heights relative to the future value of their projected sales.  Amid this enthusiastic fervor, I decided that I wanted to move to Silicon Valley to enter the fray myself.  I decided that my contribution would be in conducting international market launches and business development for internet companies.
After a stint doing web development at a small design agency, I found my opportunity to pitch a Japanese market launch for a leading search engine called LookSmart, which was replicating the Inktomi-style distributed search strategy.  Distributed search was a business-to-business (B2B) enterprise model, providing search infrastructure for companies like Yahoo!, Excite, MSN, AOL and other portals that had their own dedicated audiences.

After my company was reasonably successful in Japan, Yahoo! Japan took interest in acquiring the company, and I moved back to the US to work with Yahoo! on distributing search services to other countries across Asia Pacific.  In parallel, Netscape had followed a bumpy trajectory.  AOL purchased the company and tried to fold it into its home internet subscriber service.  America Online (AOL) was a massively popular dial-up modem service in the US at the time.  AOL had a browser of its own too.  But it was a "walled-garden" browser that tried to give users their "daily clicks" like news, weather and email, but didn't promote the open web.  It's easy to understand their perspective.  They wanted to protect their users from the untamed territory of the world wide web, which at the time they felt was too risky for the untrained user to venture out into.  It was a time of a lot of Windows viruses, pop-ups, scams, and few user protections.  AOL's stock had done really well based on their success in internet connectivity services.  Once AOL's stock valuation surpassed Netscape's, they were able to execute an acquisition.

The team at Netscape may have been disappointed that their world-pioneering web browser was being acquired by a company that had a sheltered view of the internet and a walled-garden browser, even if AOL had been a pioneer in connecting the unconnected.  It may have been a time of a lot of soul searching for Marc Andreessen's supporters, considering that the idea of Netscape had been one of decentralization, not corporate mergers.

A group of innovators inside AOL argued that a world dominated by Microsoft's IE browser was a risky future for the open, competitive ecosystem of web developers.  So they persuaded the AOL executive team to set up a skunk-works team inside AOL to atomize the Netscape Communicator product suite into component parts that could be uploaded into a modular, hierarchical bug triage tree, called Bugzilla, so that people outside of AOL could help fix code problems that were too big for internal AOL teams alone to solve.  There is a really good movie about this phase in AOL's history called "Code Rush."

The Mozilla project grew inside AOL for a long while beside the AOL and Netscape browsers.  But at some point the executive team believed that this needed to be streamlined.  Mitchell Baker, an AOL attorney, Brendan Eich, the inventor of JavaScript, and an influential venture capitalist named Mitch Kapor proposed that the Mozilla project should be spun out of AOL.  Doing this would allow all of the enterprises who had an interest in open source versions of the project to foster the effort, while the Netscape/AOL product team could continue to rely on any code innovations for their own software within the corporation.

A Mozilla in the wild would need resources if it were to survive.  First, it would need all the patents in the Netscape patent portfolio to avoid hostile legal challenges from outside.  Second, there would need to be a cash injection to keep the lights on as Mozilla tried to come up with the basis for its business operations.  Third, it would need protection from take-over bids that might come from AOL competitors.  To achieve this, they decided Mozilla should be a non-profit foundation with patent and trademark grants from AOL.  Engineers who wanted to continue to foster the AOL/Netscape vision of an open web browser for the developer ecosystem could transfer to working for Mozilla.

Mozilla left Netscape's crowdsourced web index (called DMOZ, or the Open Directory) with AOL.  DMOZ went on to seed Google's web directory when Google decided to split out from powering the Yahoo! search engine and seek its own independent course.  It's interesting to note that AOL played a major role in helping Google become an independent success as well, which is well documented in the book The Search by John Battelle.

Once the Mozilla Foundation was established (along with a $2 million grant from AOL), it sought donations from other corporations who were to become dependent on the project.  The team split out Netscape Communicator's email component as Thunderbird, a stand-alone open source email application, and the Phoenix browser was released to the public as "Firefox" because of a trademark issue with another US company over usage of the term "Phoenix" in association with software.

Google had by this time broken off from its dependence on Yahoo! as a source of web traffic for its nascent advertising business.  They offered to pay the Mozilla Foundation for search traffic routed preferentially to Google over Yahoo! or the other search engines of the day.  Taking "revenue share" from advertising was not something that the non-profit Mozilla Foundation was particularly well set up to do.  So they needed to structure a corporation that could ingest these revenues and re-invest them into a conventional software business that could operate under the contractual structures of partnerships with other public companies.  The Mozilla Corporation could function much like any typical California company with business partnerships, without requiring its partners to structure their payments as grants to a non-profit.

When Firefox emerged from the Mozilla team, it rapidly spread in popularity, in part because they did clever things to differentiate their browser from what people were used to in the Internet Explorer experience, such as letting users block pop-up banners or customize their browser with add-ons.  But the surge in its usage came at a time when there was an active exploit in IE6 that allowed malicious actors to take over the user's browser for surveillance or hacking in certain contexts.  The US government urged companies to stop using IE6 and to update to a more modern browser.  It was at this time I remember our IT department at Yahoo! telling all its employees to switch to Firefox.  And this happened across the industry.

Naturally, as Firefox market share grew, Mozilla, being a non-profit, had to reinvest all proceeds from its growing revenues back into web development and new features, so the team began to expand outside the core focus of JavaScript and browser engines.  As demand for alternative web browsers surged, several Mozillians departed to work on alternative browsers.  The ecosystem grew suddenly, with Apple and Google launching their own browsers.  As these varied browsers grew, the companies collaborated on standards that all their software would use, to ensure that web developers didn't have to customize their websites to uniquely address the idiosyncrasies of each browser consumers had a choice of.

When I joined Mozilla, there were three major issues that were seen as potential threats to the future of the open web ecosystem: 1) the "app-ification" of the web that was coming about in new phones, and how they encapsulated parts of the web; 2) the proliferation of dynamic web content that was locked behind fragmented social publishing environments; and 3) the proliferation of identity management systems using social logins that were cumbersome for web developers to utilize.  Mozilla, like a kind of vigilante superhero, tried to create innovative tactics and propose technologies to address each one of these.  It reminded me of the verve of the early Netscape pioneers in trying to organize an industry toward the betterment of the problems the entire ecosystem was facing.
To discuss these different threads, it may be helpful to look at what had been transforming the web in the years immediately prior.

What the Phone Did to the Web and What the Web Did Back

The web is generally based on HTML, CSS and JavaScript.  A web developer would publish a web page once, and those three components would render the content of the webpage on any device with a web browser.  What we were going into in 2008 was an expansion of content publication technologies, page rendering capabilities and even devices, all of which were making new demands of the web.  It was obvious to us at Yahoo! at the time that the industry was going through a major phase shift.  We were building our web services on mashups of content from many different sources.  The past idea of the web was based on static webpages that were consistent to all viewers.  What we were going toward was a dynamically-assembled web.  The concept of the static, consistent web of the 1990s was referred to as "web 1.0" in the web development community.  The new style was frequently called "mash-up" or "re-mix", using multi-paned web pages that would assemble multiple discrete sources of content at the time of page load.  We called this AJAX, for asynchronous JavaScript and XML (extensible markup language), which allowed personalized web content to be rendered on demand.  Web pages of this era appeared like dashboards and would constantly refresh elements of the page as the user navigated within panes of the site.
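
As an illustration, here is a minimal sketch of the AJAX pattern, written with the modern fetch API rather than the XMLHttpRequest object of that era.  The "/api/headlines" endpoint and the "news-pane" element are hypothetical:

    // Fetch fresh content asynchronously and re-render just one pane of
    // the dashboard-style page, without reloading the whole document.
    async function refreshNewsPane() {
      const response = await fetch('/api/headlines');  // asynchronous request
      const headlines = await response.json();         // parse the payload
      document.getElementById('news-pane').innerHTML =
        headlines.map((h) => `<p>${h.title}</p>`).join('');
    }

    // Keep the pane current as the user navigates within the site.
    setInterval(refreshNewsPane, 60000);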

In the midst of this shift to the spontaneously-assembled dynamic web, Apple launched the iPhone.  What ensued immediately thereafter was a kind of developer confusion, as Apple started marketing the concept that every developer wishing to be included on its phones needed to customize their content offerings as an app tailored to the environment of the phone.  It was a kind of exclusion, where the web developer had to parse their site into smaller chunks for ease of consumption in a smaller form factor and a different user context than the desktop environment.

The launch of the iPhone, which sought to combine this wave of the personalized dynamic web with location-based content discovery, seemed an outright rethinking of the potential of the web at large.  I was working on the AT&T partnership with Yahoo! at the time when Apple nominated AT&T to be the exclusive carrier for the iPhone launch.  Yahoo! had done its best to bring access to web content to the low-end phones that industry professionals referred to as “feature phones”.  But these devices’ view of the web was incredibly restricted, like the AOL browser of the early web.  Collaborating with Yahoo! Japan, we brought a limited set of mobile-ready web content to the curated environment of the NTT Docomo “iMode” branded phones.  We tried to expand this effort to the US.  But it was not a scalable approach.  The broader web needed to adapt to mobile.  No curatorial effort to bootstrap a robust mobile web would achieve broad adoption.

The concept behind the iPhone was to present the breadth of the web itself on the phone of every person.  In theory, every existing webpage should be able to render on the smaller screen without needing to be coded uniquely.  Håkon Wium Lie had created the idea of CSS (the cascading style sheet), which allowed an HTML-coded webpage to adapt to whatever size screen the user had.  Steve Jobs had espoused the idea that content rendered for the iPhone should be written in HTML5.  However, at the time of the phone’s release, many websites had not yet adapted their sites to the new standard means of developing HTML to be agnostic of the user's device.  Web developers were very focused on the then-dominant desktop personal computer environment.  While many web pioneers had sought to push the web forward into new directions that HTML5 could enable, most developers were not yet on board with those concepts.  So the idea of the “native mobile app” was pushed forward by Apple to ensure the iPhone had a uniquely positive experience, distinct from what every other phone would see: a poorly-rendered version of a desktop-focused website.

The adoption of the modern web architecture that existed in HTML5 hadn't reached broad developer appeal at the time the market opportunity of iPhone and Android emerged.  Mozilla saw it as a job that it could tackle: the de-appification of the app ecosystem.  Watching this ambitious project was awe-inspiring for everyone who contributed to it at the time.  Mozilla's Chief Technical Officer, Brendan Eich, and his team of engineers decided that we could make a 100% web phone without the crutch of app-wrappers.  The team took an atomized view of all the elements of a phone and sought to develop a web interface to allow each element of the device to speak web protocols, such that a developer could check battery life, motion status, gesture capture or other important signals relevant to the mobile user that hadn't been utilized in the desktop web environment.  And they did it.  The phone was app-less, with everything running in JavaScript on user demand.  The phones launched in 28 countries around the world.  I worked on the Brazilian market launch, where there was dizzying enthusiasm about the availability of a lower cost smartphone based on an open source technology stack.
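
The Battery Status API, one of the device APIs that grew out of this era of work, gives a flavor of what exposing phone hardware to the web looks like.  A minimal sketch follows; the console output is illustrative and browser support varies:

    // Query the phone's battery hardware from plain JavaScript.
    navigator.getBattery().then((battery) => {
      console.log(`Battery level: ${battery.level * 100}%`);
      console.log(`Charging: ${battery.charging}`);

      // Subscribe to hardware events the way a native app would.
      battery.addEventListener('levelchange', () => {
        console.log(`Battery level changed: ${battery.level * 100}%`);
      });
    });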

As we prepared for the go-live of the “FirefoxOS” phone launch in Brazil, the business team coordinated outreach through the largest telecommunications carriers to announce availability (and shelf space in carrier stores) for the new phones, while I and the Mozilla contributors in Brazil reached out to the largest websites in the country to “consent” to listing their sites as web applications on the devices.  Typically, when you buy a computer, web services and content publishers aren’t “on” the device; they are just accessible via the device’s browser.  But iPhone and Android’s trend of “appification” of web content was so embedded in people’s thinking that many site owners thought they needed to do something special to be able to provide content and services to our phone’s users.  Mozilla therefore borrowed the concept of a “marketplace”, a web index of sites that had posted their site’s availability to FirefoxOS phone users.

Steve Jobs was a bit haunted by the app ecosystem he created.  It became a boon for his company, with Apple being able to charge a toll of $0.99 or more for content that was already available on the internet for free.  But he urged the developer community to embrace HTML5 even while most developers were just plopping money down to wrap their web content in the iTunes-required app packaging.  (The iPhone grew out of Apple's music player project, the iPod, which is why every app the phone needed had to be installed from “iTunes”, the music player application Apple included on every device it sold for distributing music and podcasts.)  Products such as PhoneGap and Titanium popped up to shim web content into the app packaging frameworks required by the Google-acquired Android platform and Apple's iTunes.  But using shims and app wrappers was an inelegant detour on the web’s gradual path toward openness.  Something needed to change to de-appify the untidy hacks of the Jobs era.  And this effort is going on to this day.

Mozilla’s engineers suggested that there shouldn’t be a separate concept of a “mobile web”, and that we should do everything we can to persuade web developers and content publishers to embrace mobile devices as “1st class citizens of the web.”  So they hearkened back to the concepts in CSS, the much earlier development of web architecture mentioned previously, and championed the concept of device-aware responsive web design under the moniker of “Progressive Web Apps.”  The PWA concept is not a new architecture per se.  It’s the idea that a mobile-enhanced internet should be able to do certain things that a phone-wielding user expects it to do.  So a webmaster should take advantage of certain things a user on the move might expect differently from a user sitting at a desktop computer.  PWA work is being heavily championed by Google for the Android device ecosystem now, because it is larger than the iPhone ecosystem, and also because Google understands the importance of encouraging a seamless experience of web content agnostic of which device you happen to possess.
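
A minimal sketch of the two signature ingredients of a PWA, assuming hypothetical file names, might look like this:

    // In the page: register a service worker, a script the browser keeps
    // independent of the page, so the site can work offline and receive
    // push notifications like an installed app.
    if ('serviceWorker' in navigator) {
      navigator.serviceWorker.register('/sw.js')
        .then(() => console.log('Service worker registered'));
    }

    // In sw.js: cache the app shell at install time so a user on a
    // flaky mobile connection still gets the page instantly.
    self.addEventListener('install', (event) => {
      event.waitUntil(
        caches.open('app-shell-v1').then((cache) =>
          cache.addAll(['/', '/styles.css', '/app.js'])
        )
      );
    });

The point is that the same URL serves desktop and mobile users alike; the "app-like" behavior is layered onto the web page rather than packaged into a separate binary.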

After the launch of the phone, because Mozilla open sources its code, many other companies picked up and furthered the vision.  The operating system has since been forked into TVs, smart watches and micro-computers, and continues to live on in phones under different brand names to this day.  In addition, the project of the atomized phone, with hardware elements that can speak HTTPS for networking with other devices, has expanded into the current Internet of Things project in Mozilla’s Emerging Technologies group: bringing the hardware products we buy (which all speak relatively incompatible radio frequencies) to the lingua franca of the protocols of the internet.  Not everyone has a Mozilla phone in their pocket.  But that was never a goal of the project.
This brings me to one of the concepts that I appreciate most about Mozilla and the open source community.  An idea can germinate in one mind, be implemented in code, then set free in the community of open source enthusiasts.  Then anyone can pick it up and innovate upon it.  While the open sourcing of Netscape wasn’t the start of this movement, it has contributed significantly to the practice.  The people who created the world wide web continue to operate under the philosophy of extensibility.  The founders of Google’s Chromium project were also keen Mozillians.  The fact that a different company, with a different code base, created a similarly thriving open source ecosystem of developers aiming to serve the same user needs as Mozilla’s is, in my view, the absolute point of what Mozilla’s founders set out to promote.  And it echoes those same sentiments I’d heard expressed back in Washington, DC in the late 1990’s.

One of the things that I have studied a great deal, with fervor, fascination and surprise, is the US patent system.  Back in the early days of the US government, Secretary of State Jefferson implemented the concept of a legal monopoly on inventions.  It was established by law for the government first, then expanded to the broader right of all citizens, and later to all people globally, via the US Patent and Trademark Office.  I had an invention that I wished to patent and produce for the commercial market.  My physics professor suggested that I not wait until I finished my degree to pursue the project.  He introduced me to another famous inventor from my college and suggested I meet with him.  Armed with great advice, I went to the USPTO to research prior art that might relate to my invention.  Upon thorough research, I learned that anyone in the world can pursue a patent and be given a 17-year monopoly option to protect the invention while the merits of its market fit are tested.  Thereafter, the granted patent passes to the global community, free of royalties.  “What!?” thought I.  I declare the goods to the USPTO so they can give them away to all humanity shortly thereafter, once I've done all the work to bring the invention to market?  This certainly didn’t seem like a very good deal for inventors, in my view.  But it also went back to my learnings about why the government prefers certain inefficiencies to propagate for the benefit of the greater common good of society.  It may be that Whirlpool invented a great washing machine.  But Whirlpool should only be able to monopolize that invention for 17 years before the world at large can reap the benefits of the innovation without royalties due to the inventor.

My experiences with patents at Yahoo! were also very informative.  Yahoo! regularly pursued patents, including for one of the projects I launched in Japan.  But its defense of patents had been largely in the vein of the “freedom to operate” concept, in a space where its products were similar to those of other companies which also had patents or amicable cross-licensing with other organizations that operated in a similar space.  (I can’t speak for Yahoo!’s philosophical take on patents as I don’t represent them.  But these opinions stem from how I observed them enforcing their rights to formally granted USPTO patents and how they exercised those rights in the market.)  I believed that the behaviors of Yahoo!, AOL and Google were particularly generous and lenient.  As an inventor myself, I was impressed with how the innovators of Silicon Valley, for the most part, did not pursue legal action against each other.  It seemed they actually promoted iteration upon their past patents.  I took away from this that Silicon Valley is more innovation-focused than business-focused.  When I launched my own company, I asked a local venture capitalist whether I should pursue patents for a couple of the products I was working on.  The gentleman, a partner at the firm, said, paraphrasing: “I prefer action over patents.  Execute your business vision and prove the market value.  Execution is more valuable than ideas.  I’d rather invest in a good executor than an inventor.”  And from the 20 years I’ve seen here, it always seems to be the fast follower rather than the inventor who gets ahead, probably precisely because they focus on jumping directly to execution rather than spending time scrawling protections and illustrations with lawyers.

Mozilla has, in the time I’ve worked with them, focused on implementing first in the open, without thinking that an idea needed to be protected separately.  Open source code exists to be replicated, shared and improved.  When AOL and the Mozilla project team open sourced the code for Netscape, they essentially opened the patent chest of the former Netscape intellectual property for the benefit of all small developers who might wish to launch a browser without the cumbersome process of vetting the provenance and licensing of the code.  Bogging down developers with patent-“encumbered” code would slow them in introducing their own unique innovations.  Watching a global market launch of a new mobile phone based on entirely open source code was a phenomenal era to witness.  And it showed me that the benevolent community of Silicon Valley’s innovators had a vision much akin to that of the people I’d witnessed in Washington DC.  But this time I’d seen it architected by the intentional acts of thousands of generous and forward-thinking innovators, rather than through the act of legislation or the legal prompting of politicians.

The Web Disappears Behind Social Silos

The web 2.0 era, with its dynamically assembled web pages, was a tremendous advance in the ability of web developers to streamline user experiences.  A page mashed up from many different sources could enhance the user’s ability to navigate across troves of information that would otherwise take a considerable amount of time to click and scroll through.  But something is often lost when something else is gained.  When Twitter introduced its micro-blog platform, end users of the web were able to publicize content they curated from across the web much faster than by authoring full blog posts and web pages about content they sought to collate and share.  Initially, the Twitter founders maintained an open platform where content could be mashed up and integrated into other web pages and applications.  Thousands of great new visions and utilities were built upon the open publishing backbone it enabled.  My own company and several of my employers also built tools leveraging this open architecture before the infamous shuttering of what the industry called “The Twitter Firehose”.  But that shuttering portended yet another phase shift in the very nascent era of the newly invented social web.  The Twitter we knew became a diaspora of sorts as access to the firehose feed was locked down behind identity-protecting logins.  This may be a great boon to those seeking anonymity and small “walled gardens” of circles of friends.  But it was not particularly good for what many of the innovators of the web 2.0 era had hoped would be a greater enfranchisement of web citizenry.
Many of the early pioneers of the web wanted to foster a web ecosystem where all content linked on the web could be accessible to all, without hurdles on the path that delayed users or obscured content from being sharable.  Just as the app-ified web of the smartphone era cordoned off chunks of web content that could be gated by a paywall, the social web split into further factions as login walls descended around environments in which users had previously been able to easily publish and share content.

The parts of the developer industry that weren’t mourning the loss of the open-web components of this great social fragmentation were complaining about the psychological problems that emerged, once removed from the underlying cause.  Fears of censorship and filter-bubbles spread through the news.  The idea that web citizens now had to go out and carefully curate their friends and followers led psychologists to criticize the effects of social isolation on one side, and the risk of altering the way we create genuine offline friendships on the other.

Mozilla didn’t take a particular stance on the philosophical underpinnings of the social web.  In a way, the Bugzilla platform we used to build and maintain Firefox and Thunderbird was a purpose-built social network of collaboration, with up-voting and hierarchical structures.  But it was all open source, like the code commits it housed.  We did have discussions around codes of conduct for our Bugzilla community, geared toward ensuring that it remained a collaborative environment where people from all walks of life and all countries could come together and participate without barriers or behaviors that would discourage or intimidate open participation.

But there were certain specific problems that social utilities introduced to the web architecture in terms of the code they required webmasters to integrate for them to be used.  So we focused on those.  The first one we hit upon was the “log in with X” problem.  In the United States, people love to watch race cars go around concrete tracks.  They consider it a sport.  One of the most famous racing brands was called Nascar, which was famous for having its cars and drivers covered with small advertisements from their commercial sponsors.  As the social web proliferated, webmasters were putting bright icons on their websites with JavaScript prompts to sign in with five or more different social utilities.  We called this problem “Nascar”, because the webmaster never knew which social site a user had an identity registered with.  So if a user visited once and logged in with Twitter, and another time accidentally logged in with Facebook, their persona represented on the original site might be lost and irretrievable.  Mozilla thought this was something a browser could help with.  If a user stored a credential at the browser level, agnostic of which source, the user wouldn’t need to be constantly peppered with options to authenticate with 10 different social identities.  This movement was well received initially, but many called for a more prominent architecture to show the user where their logged identities were being stored.  So Mozilla morphed the concept from “BrowserID” into the idea of a Firefox Accounts tool that could be accessed across all the different copies of Firefox a user had (on PCs, phones, TVs or wherever they browsed the web).  Mozilla then allowed users to synchronize their identities across all their devices, with strong cryptography to ensure data could never be intercepted between any two paired devices.

Firefox Accounts has expanded to allow users to synchronize secure session history, browser extensions and preferences, stored passwords (which, for the wary, reduces exposure to keystroke logging) and file transmission with Firefox Send.  Over the years the Firefox team has experimented with many common utilities that add to user convenience in leveraging their saved account data.  And where Mozilla didn’t offer a service but an add-on developer did, the Firefox Account could be used to synchronize those add-on-based services as well.
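
The W3C Credential Management API is the kind of standardized descendant these browser-level credential ideas pointed toward.  A minimal sketch, with a hypothetical "/api/login" endpoint, might look like this:

    // Ask the browser for a stored credential instead of Nascar-ing the
    // page with a row of competing social login buttons.
    navigator.credentials.get({ password: true, mediation: 'optional' })
      .then((credential) => {
        if (credential) {
          // The browser returned a stored identity; sign the user in.
          return fetch('/api/login', {
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify({ id: credential.id, password: credential.password }),
          });
        }
        // Otherwise, fall back to a traditional login form.
      });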

The other great inconvenience of the social web was the number of steps necessary for users to share conventional web content on the social web.  Users would have to copy and paste URLs between browser windows if they wished to comment on or share web content.  Naturally, there was a Nascar solution for that as well: if the web developers of every website would put in a piece of JavaScript that users could click to upvote or forward content, that would solve everything, right?  Yeah, sure.  And it would also bog down the pages with lots of extraneous code that had to be loaded from different web servers around the internet.  Turning every webpage into a Frankenstein hodge-podge of Nascar-ed promotions of Twitter and Facebook buttons didn’t seem like an elegant solution to Mozilla’s engineers either!

Fortunately, this was obvious to a large number of the web development and browser community as well.  So the innovative engineers of Mozilla, Google and others put their heads together on a solution that we could standardize across web browsers, so that every single website in the world didn’t have to code in a solution unique to every single social service provider.  The importance of this was accentuated when the website of the United States White House integrated a social engagement platform that was found to be tracking the visits of people who came to the web page, with tiny code snippets that the White House itself hadn’t engineered.  People generally like the idea of privacy when they're visiting web pages.  The idea that a visit to read what the president had to say came with a tradeoff that readers would subsequently be tracked didn't appeal to the site visitors any more than it appealed to the US government!

To enable a more privacy-protecting web, yet still offer the convenience users sought in engaging with social utilities, Mozilla’s engineers borrowed a concept from the progressive web app initiative.  PWAs, which emulated the engagement metaphors of phone apps, utilized the concept of a user “intent”.  Just as a thermostat in a house expresses its setting as a “call for heat” from the house’s furnace, there needed to be a means for a user to have an “intent to share”.  And as phone manufacturers had enabled the concept of sharing at the operating system level for the applications users leveraged to express those intentions on the phone, a browser needed to have the same capability.

At Mozilla we called these concepts “Social APIs.”  An API (application programming interface) is a kind of handshake socket through which one program can interface with another.  The term generally covers any socketing capability between hardware, stand-alone software or a web service and another entity that is not controlled by the originating interface.  Microsoft’s Outlook email software can interface effortlessly with a Google Gmail account using an API, if the user authorizes the program to make such requests to their Gmail account, without Microsoft or Google ever having to be directly involved in the authentication the user initiates.  Just as Firefox Accounts could sync services on behalf of a user without knowing any of the details of the accounts the user requested to sync, so too should a browser be able to recognize when a user wants to share something, without making the user dance between browser windows with copy and paste.
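
The W3C Web Share API is the standardized descendant of this share-intent idea; a minimal sketch might look like this:

    // Express the user's intent to share, and let the browser hand off
    // to whatever service or app the user prefers.
    async function shareCurrentPage() {
      if (navigator.share) {
        await navigator.share({
          title: document.title,
          url: window.location.href,
        });
      } else {
        // Fall back to the old copy-and-paste world where unsupported.
        await navigator.clipboard.writeText(window.location.href);
      }
    }

Note that the page never learns which social service the user shared to; the browser brokers the hand-off, just as the operating system does on a phone.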

So Mozilla promoted the concept of browsers supporting share intents, as well as notification intents, so that our users didn’t have to stay logged into their social media accounts in order to be notified when something required their attention on any given account.  We did this with some great consideration.  There was a highly-marketed trend in Silicon Valley at the time around “gamification”, the concept that web developers could use points and rewards to drive loyalty and return visits among web users.  Notifications were heralded by some as a great way to delight visitors of your website, along with the opportunity to lure them back for more of your web goodness, whatever you offered.  Would developers over-notify, we wondered?  There was a potential for oversaturation and distraction that could cost users more in attention and time than it benefited them.

Fortunately, we did not see huge notification abuses from the sites that supported Social API.  And we did see widespread interest from the likes of Facebook, Twitter, Yahoo! and Google, which were the major messaging service providers of the day.  And so we jointly worked to uplevel this to the web standards body called the World Wide Web Consortium (abbreviated as the W3C) for promotion outside the Firefox ecosystem, so that it could be used across all web browsers which supported W3C standards.
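
That standardization work is visible today in the W3C Notifications API.  Here is a minimal sketch (the message text and icon path are illustrative); the permission prompt is the guard against the over-notification we worried about:

    // The page must first ask the user's permission to notify.
    Notification.requestPermission().then((permission) => {
      if (permission === 'granted') {
        // Alert the user even when they aren't logged into the site.
        new Notification('New reply to your thread', {
          body: 'Someone responded to your comment.',
          icon: '/icons/notify.png',  // hypothetical icon path
        });
      }
    });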

Working with this team I learned a great deal from my peers in the engineering organization.  At first I thought, if this is such a great idea, why doesn’t Firefox try to make this a unique selling point of our software?  What’s the rush to standardize it?  Jiminy Cricket voices across the organization pointed out that the goal of our implementation of open source code in the browser is precisely to have others adopt the greatest ideas and innovate upon them.  The purpose of the standards organizations we work with is to pass on those innovations so that everyone else can utilize them without having to adopt Firefox-specific code.  Good ideas, like the USPTO’s concept of eventual dissemination to the broader global community, are meant to spread to the entire ecosystem, so that webmasters avoid the pitfall of coding their website to the functionality of a single piece of software or web browser.  Mozilla engineers saw their mission in part as being to champion web compatibility, which they often shortened to webcompat in our discussions at developer events.  Firefox is a massive population of addressable users.  But we want web developers to code for all users to have a consistently great experience of the web, not just our audience of users.  There is a broad group of engineers across Microsoft, Google, Apple, Samsung, Mozilla and many small software developers who lay down the flags of their respective companies and band together in the standards bodies to dream of a future internet beyond the capability of the software and web we have today.  They do this with a sense of commitment to the future we are creating for the next generation of internet, software and hardware developers who are going to follow in our footsteps, just as we inherited code, process and standards from our forebears.  It is the yoke of our current responsibility to pass on the baton without being hampered by the partisanship of our competing companies.  The web we want has to be built today if future generations are going to be set up for success amid the demands of the technology environment we will create for tomorrow.

Twice a year the executive team at Mozilla convenes the people who support the Mozilla Foundation non-profit (and its daughter corporate entity that ships the software) in all-hands meetings where we discuss our part in this shared vision.  Our chairwoman, Mitchell Baker, who has led the Mozilla Project since its spin-out from AOL many years ago, gets up on stage to discuss the opportunities she and the foundation see as the web ecosystem evolves.  She speaks in rousing language, with phrases like "The Web We Want," to instill in our contributors an inspiring sense of passion and responsibility. We then disperse around the globe as carriers of this mission, flag and inspiration in hand, to champion and inspire others in turn.
After one of these events I went off to muse on our projects with one of my mentors, an engineer named Shane Caraveo.  I'd been researching and thinking a lot about the bluster and buzz in the broader internet and press communities about social media platforms.  Facebook had been commissioning studies on the psychological benefits and pitfalls of social media use, and I'd listened to their commentaries defending the new paradigms of the social web.  I asked Shane what he thought. Shouldn't we be championing people to go off and build their own web pages, instead of making it ever easier to lean on social platforms and tools? Shane pointed out that Mozilla does do that, especially through the Mozilla Developer Network, which demonstrates with code examples exactly how to integrate various W3C specs, for website owners, systems administrators and general web enthusiasts.  Then Shane made a comment that sat with me for years after: "I don't care how people create content and share it on the internet.  I care that they do."

The First Acquisition

The standardization of web notifications across browsers was one of the big wins of our project.  The other, for Mozilla specifically, was the acquisition of the Pocket platform.  When I worked at Yahoo!, one of the first Web 2.0 acquisitions they had made was the bookmark backup and sharing service del.icio.us.  (The name was awkward because many companies of the day had given up on paying for overpriced .com URLs in favor of the new surfeit of domains available under the ".us" top-level domain name space.) Our Yahoo! team had seen the web-sharing trend, pre-Facebook, as one of the greatest new potentials for web publishers to disseminate their content, through the praise and consequent desire to "re-post" content among circles of friends.  Many years later Yahoo! sold the cloud bookmarking business to the founders of YouTube, who sought to rekindle the idea.  But another entrepreneur, named Nate Weiner, had taken a different approach to solving the same problem: he'd built addons for web browsers to address the need for cloud bookmarking.

Saving web content may seem like a fringe use case reserved for the most avid web users.  But the Pocket service saw considerable demand.  With funding from Google's venture investing arm, among others, Nate was able to grow Pocket to support archiving addons for Google's Chrome browser and Android and iOS phones, and even to expand into a destination website where users could browse recommendations of content saved by a small, tight-knit group of curators.  (If this sounds to you like Netscape's DMOZ project from 20 years ago and del.icio.us from 10 years ago, that was my thought too.)  But it was perhaps the decentralization of Pocket's approach that made it work so well.  The community of contributors supporting it was web-wide!  And the refined stream of content coming out of its recommendations was high-quality journalism that was in no way influenced by the news publishing industry, which had its own approaches to content promotion.

When I first met the Pocket team, they commented that their platform was not inherently social, so the constraints of the Social API architecture didn't fit the needs of their users.  They suggested that we create a separate concept around "save" intents, which did not fit within the constraints of the social media intents the phones and services were pursuing at the time.  When Firefox introduced the "save" function in our own browser, it seemed to duplicate the existing architecture of "Save to Bookmarks" plus "Firefox Accounts Sync".  But we found that a tremendous number of users preferred the Pocket save to the sync-bookmarks architecture we already had.
Because Google had already invested in Pocket, I had thought it more likely that they would eventually join the Chrome team.  But by a stroke of good fortune, the Pocket team had had a very good experience working alongside the Mozilla team, and decided that they preferred to join Mozilla to pursue the growth of their web services.  This was the first acquisition Mozilla had executed.  Having seen how acquisition integrations sometimes fared in Silicon Valley, I was fascinated to see how Mozilla would operate another company with its own unique culture.  Fortunately, in my view, Pocket continues to support all the browsers that compete with Firefox.  And the community of Pocket users and contributors remains robust and active to this day.

Protection of Anonymity

One of the most fascinating industry-wide efforts I saw at Mozilla was the campaign to protect user anonymity and to enable pseudonymity for users.  As social networking services proliferated in the Web 2.0 era, several mainstream services sought to force users into a web experience where they could have only a single, externally verified web identity.  The policy was lambasted in the web community as a form of censorship, blocking internet authors from using pen names and aliases (the way Mark Twain authored books under his nom de plume rather than his birth name).

On the flip side of the argument, proponents of real-name policies theorized that the anonymity of web identities led to trolling behaviors in social media, where people would be publicly criticized by anonymous voices who could avoid reputational repercussions.  This would, in theory, let those anonymous voices say things about others unconstrained by the normal pressures of decency in daily society.

Wired magazine wrote editorial columns against real-name policies, saying that users turn to the web to be whoever they want to be, and to express anonymously ideas they couldn't express without multiple pen names.  A person's web identities (sometimes referred to as "handles," from the early CB radio practice of using declared identities in radio transmissions) would allow them to be more creative than they otherwise would be.  One opinion piece suggested that the web is where people go to be a Humpty Dumpty assortment of diverse identities, not to be corralled into a single source of identity. I myself had used multiple handles for my web pages.  I wanted my music hobby websites, photography website and business websites to all be distinct. In part, I didn't want business inquiries routed to my music website, and I didn't want my avocation tangled up with my business either.

European governments jumped in to legislate the preservation of anonymity with laws referred to as the "right to be forgotten," which would force internet publishers to take down content if a user requested it.  In a world where content was already fragmented and detached from its initial author, how could any web publisher comply with individual requests for removal? Disambiguating names across the broader internet was never part of the web's protocols.  So reputation policing in a decentralized content publishing ecosystem proved tremendously complicated for web content hosts.
Mozilla championed investigations, such as the Coral Project, to address the specific problems of internet trolling on the public commenting platforms of news sites.  But as a relatively small player in the broader market, it was always going to be challenging to address a behavioral problem with open source code.  Meanwhile a broader issue was emerging as a threat to Mozilla's guiding principles: behaviorally targeted advertising that spanned websites posed a significant threat to internet users' right to privacy.

In the early days of the non-profit, the founders of Mozilla had penned a manifesto of principles as the guiding framework for how they would govern the projects they intended to sponsor.  (The full manifesto can be read here: https://www.mozilla.org/en-US/about/manifesto/) In general, the developers of web software have the specific interests of their end users as their guiding light.  They woo customers to their services and compete by introducing new utilities that contribute to the convenience and delight of their users.  But sometimes the companies that make the core services we rely on have to outsource some of the work of bringing those services to us.  With advertising, this outsourcing became a slippery slope. The advertising ecosystem's evolution amid the emergence of Web 2.0, and the trade-offs publishers were making with regard to end-user privacy, became too extreme for Mozilla's comfort.  Many outside Mozilla also believed the privacy compromises being made were unacceptable, and banded together in support of us.

While this is a sensitive subject that raises ire for many people, I can sympathize with the motivations of the various complicit parties that contributed to the problem.  As a web publisher myself, I had to think a lot about how I wanted to bring my content to my audience. Web hosting costs increase with the volume of audience you wish to entertain.  The more people who read and streamed my articles, pictures, music and video content, the more I would have to pay each month to keep them happy and to keep the web servers running. All free web hosting services came with compromises.  So, eventually, I decided to pay my own server fees and incorporate advertising to offset them.

Deciding to post advertising on your website is a concession of control.  If you use an ad network with dynamic ad targeting, the advertising platform decides what goods or services show up on your web pages.  When I was writing about drum traditions from around the world, the ad platform might conclude my website was about oil drums and show ads for steel barrels.  As a web publisher, I winced. Oil barrels aren't remotely relevant to people reading about African drums. But it paid the bills, so I tolerated it. And I figured my site visitors would forgive the incongruity of seeing oil barrels next to my drums.

I was working at Yahoo! when the professed boon of behavioral advertising swept through the industry.  Instead of serving semantically derived, keyword-matched ads on my drum web page, I could suddenly allow the last webpage you visited to buy "re-targeting" ads on my page, continuing a more personally relevant experience for you and replacing those oil barrel ads with offers from sites that had been relevant to your personal journey yesterday, regardless of what my website was about.  This did have the unsightly side effect that products you had already purchased on an ecommerce site would follow you around for months. But it paid the bills, and it paid better than the mis-targeted ads. So more webmasters started doing it.

Behaviorally targeted ads seemed at first like a slight improvement in a generally under-appreciated industry.  But because they worked so well, significant investment demand spurred ever more refined targeting platforms in the advertising technology industry.  And internet users became increasingly uncomfortable with what they perceived as pervasive intrusions on their privacy. Early on, I remember thinking, "They're not targeting me, they're targeting people like me."  Because the ad targeting was approximate, not personal, I wasn't overly concerned.

One day at Yahoo! I received a call.  It had been escalated through the customer support channels as a potential product issue.  As the responsible director for the product channel, I was asked if I would talk to the customer.  Usually, business directors don't do customer support directly, but as nobody else was available to field the call, I did.  The customer was receiving inappropriate advertising in their browser. It had nothing to do with a Yahoo!-hosted page, which has filters for such advertising.  It was caused by a tracking cookie that the user, or someone who had used the user's computer, had acquired in a previous browsing session. I explained to the user how to clear the cookie store in their browser, which was not a Yahoo! browser either, and the problem was resolved.  This experience made me take to heart how seriously people fear the perceived invasions of privacy from internet platforms. The source of the problem had nothing to do with my company, but this person had nobody else to turn to to explain how web pages work. Considering how rapidly the internet emerged, it dawned on me that many people who lived through its emergence never had a mentor or teacher explain how these technologies worked.

Journalists started to uncover some very unsettling stories about how ad targeting could become directly personal.  Coupon offers on printed store receipts were revealing customers' purchase behaviors, which could expose details of their personal lives and even their health.  Because principle #4 of Mozilla's manifesto holds that "Individuals' security and privacy on the internet are fundamental and must not be treated as optional," Mozilla decided to tackle the ills of personal data tracking on the web through the browser headers themselves, the handshake that happens between a computer and a web server at the start of a browsing session.
The savviest web users know what browser cookies are, where to find them and how to clear them if needed.  But one of our security engineers pointed out to me that we don't want our customers forever chasing down errant cookies and compulsively flushing their browser history.  That is friction, noise and inconvenience the web imposes on its primary beneficiaries. The web browser, as the user's delegated agent, should handle these irritations without wasting its customers' time or sending them hunting for pesky plumbing issues in the software's preference settings.  The major browser makers banded together with Mozilla to try to eradicate this ill.

It started with a very simple tactic.  The browser cookie had been invented as a convenience for navigation and state-keeping: if you visited a page and later returned to it, the site could remember you without making you start over.  That was the need the cookie solved.  Any web page you visit can set a cookie if it needs to offer you some form of customization.  Subsequently, advertisers came to view a visit to their webpage as a kind of consent to be cookied, even if the visit happened inside a browser frame, called an inline frame (iframe). You visited Amazon previously; surely you'd want to come back, they assumed.  There ought to have been an explicit statement of trust, an "opt-in," even though a visit to a web destination was in no way a contract between a user and a host. Session history seemed like a good implied vector for defining trust, except that not all elements of a web page are served from a single source. Single origin was a very Web 1.0 concept.  In the modern web environment, dynamically aggregated web pages pull content, code and cookies from dozens of sources in a single page load.

The environment of trust was deemed to be the 1st-party relationship between the site a user visits in their web browser and the browser's cookie store, a temporary "cache of history" meant to be used over short time frames.  Cookies and other history-tracking elements served in the iframe windows of a webpage (the portions of the page that web designers "outsource" to external content calls) were considered outside that environment of user-delegated trust in the 1st party.  These were called "3rd-party cookies" and were considered ephemeral.
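Browsers later codified this 1st-party/3rd-party distinction in the cookie syntax itself, via the SameSite attribute. Here is a minimal sketch, assuming a plain Node HTTP server and invented cookie names, of how a host declares which trust environment each cookie belongs to:

```typescript
// A minimal sketch (plain Node HTTP server, hypothetical cookie names) of
// declaring cookie trust scope with the SameSite attribute, the eventual
// codification of the 1st-party / 3rd-party distinction described above.
import { createServer } from "http";

createServer((_req, res) => {
  res.setHeader("Set-Cookie", [
    // 1st-party session cookie: sent only on direct visits to this site.
    "sessionId=abc123; SameSite=Lax; Secure; HttpOnly",
    // Cross-site (3rd-party) cookie: must opt in explicitly and be Secure;
    // modern browsers treat these as candidates for blocking or expiry.
    "adPrefs=xyz789; SameSite=None; Secure",
  ]);
  res.end("ok");
}).listen(8080);
```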

Browser makers tended to standardize their handling of web content across their separate platforms through the W3C or other working groups.  And in order to create a standard, there had to be a reference architecture that multiple companies could implement and test. The first attempt at this was called "Do Not Track" (DNT), a preference a user could set in their browser asking that trackers be quarantined or blocked.  On each visit, the browser would send a header to the web server stating that any cookie should be used for that session only, not endure across subsequent visits to other web pages. This seemed innocuous enough: it allowed the page architecture to remember the session just as long as necessary to complete it.  And most viewed the DNT setting as a simple enough statement of the trust environment between a web publisher and a visitor for daily browser usage.
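In practice, the DNT signal surfaced in two places: a "DNT: 1" request header sent to the server, and a navigator.doNotTrack property readable by page scripts. A minimal sketch of how a cooperating site could honor it voluntarily (the analytics loader below is a hypothetical placeholder; nothing compelled sites to gate their trackers this way):

```typescript
// A minimal sketch of voluntarily honoring the Do Not Track preference.
// The same signal the browser sends as a "DNT: 1" request header is exposed
// to scripts as navigator.doNotTrack (a non-standard, now-deprecated
// property, hence the cast).
function userOptedOutOfTracking(): boolean {
  // "1" means the user asked not to be tracked.
  return (navigator as any).doNotTrack === "1";
}

if (!userOptedOutOfTracking()) {
  // loadAnalytics() is a hypothetical placeholder for a tracker loader.
  // loadAnalytics();
}
```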

All the major browser vendors addressed the threat of government supervision with the idea that the industry should self-regulate.  Meaning, browsers, publishers and advertisers should reach a general consensus on how best to serve the people using their products and services, without government legislators mandating how code should be written or operate.  Oddly, it didn't work so well. Eventually, certain advertisers simply decided not to honor the DNT header. The US Congress invited Mozilla to discuss what was happening and why some browsers and advertising companies had decided to ignore the user preferences stated by our shared code in browser headers.

Our efforts to work in open source via DNT with the other industry parties ultimately did not protect users from belligerent tracking. They resulted in a whack-a-mole problem we referred to as "fingerprinting," in which advertising companies re-targeted users based on their computer or phone hardware characteristics, or even based on the preference not to be tracked itself!  It was a bit preposterous to watch this happen across the industry and to hear the explanations of those doing it. What was inspiring to watch, on the other side, was the effort of the Mozilla product, policy and legal teams to push this concern to the fore without asking for legislative intervention.  Ultimately, European and US regulators did decide to step in and create legal frameworks to punitively address breaches of user privacy enabled by the technology of an intermediary.  Even after the launch of the European GDPR regulatory framework, the ensuing scandals around the lax handling of users' private data in internet services are widely publicized and remain at the forefront of technology discussions and education.
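To make "fingerprinting" concrete, here is a deliberately naive sketch of the technique: ordinary browser attributes, each innocuous alone, combine into a quasi-unique identifier that needs no cookie at all. This is simplified for illustration; real trackers used far more signals.

```typescript
// A deliberately naive sketch of browser fingerprinting: combine ordinary
// attributes into one string, then hash it into a stable pseudo-identifier.
// Illustration only; real trackers used many more signals.
async function naiveFingerprint(): Promise<string> {
  const signals = [
    navigator.userAgent,
    navigator.language,
    `${screen.width}x${screen.height}x${screen.colorDepth}`,
    Intl.DateTimeFormat().resolvedOptions().timeZone,
    String(navigator.hardwareConcurrency),
  ].join("|");
  // Hash so the tracker stores an opaque ID rather than raw attributes.
  const bytes = new TextEncoder().encode(signals);
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}
```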

(More to come as the story unfolds)

Thursday, April 25, 2019

An Author-Optimized Social Network Approach

(Scientific American art credit: jaybendt.com)
In this month's edition of Scientific American magazine, Wade Roush comments on social networks' potentially deleterious impact on emotional well-being (Scientific American, May 2019: Turning Off the Emotion Pump).  He asks, "Are there better social technologies than Facebook?" and cites previous attempts such as the now-defunct Path and the still-struggling Diaspora as promising developments. I don't wish to detract from the contemporary concerns about notification overload and privacy leaks. But I'd like to highlight the positive side of social platforms for spurring creative collaboration, and to suggest an approach that could expand the positive impact they facilitate in the future. I think the answer to his question is: we need more diverse platforms and better utilities.


In our current era, everyone is a participant, in some way, in the authorship of the web. That's a profound and positive thing. We are all enfranchised in a way that most previously were not.  As an advocate for the power of the internet to advance creative expression, I believe the benefits we've gained from this online enfranchisement should not be overshadowed by the aforementioned bumps in the road.  We need more advancement, perhaps of a different kind than most mainstream social platforms have achieved to date.  Perhaps it is just the utilization that needs to shift, more than the tools themselves. But as a product-focused person, I think some design factors could shape the change we'd need to see for social networks to be a positive force in everybody's lives.


When Facebook turned away from "the Facebook Wall," its earliest iteration, I was fascinated by the innovation.  The site was no longer a bunch of profile destinations interlinked by notifications of what people said about each other. It became an atomized webpage that looked different to everyone who saw it, depending on the quality of the contributions of the users they had linked to.  The outcome was a mixed bag, because each visitor's experience was so different. Some people saw amazing things from the active creators and contributors they'd linked to.  Others saw the boredom of a stagnant or overly narrow pool of peer contributors reflected back at them. Whatever your opinion of the content of Facebook, Twitter and Reddit, as subscription services they provide tremendous utility in today's web.  They are far superior to the web rings and Open Directory Project of the 1990s, because they are reader-driven rather than author/editor-driven.


The experimental approach I'm going to suggest for advancing next-generation social networks should probably happen outside the established platforms, for when experimentation is done within those services it can jeopardize the perceived user control and trust that attracted their users in the first place.


In a brainstorm with an entrepreneur named Lisa, she pointed out that the most engaging and involved collaborative discussions she'd seen had taken place on Ravelry and in Second Life.  Knitting and creating 3D art take an amazing investment of time.  She posited that it may be this invested time that leads to the quality of the personal interactions on such platforms.  Conversely, it may be the casualness of engagement on conventional public forums that makes those interactions haphazard, impersonal and less constructive or considerate. Our brainstorm turned to how more such platforms might emerge to spur ever greater realization of new authorship, artistry and collaboration. We focused not on the volume of people or the velocity of engagement, but on the greatest individual contribution.


The focus (raison d'être) of a platform tends to skew the behaviors on it, and the constraints of the platform's interface can hamper or facilitate the individual creation or art it hosts. (For instance, Blogger, Wordpress and Medium are great for long-form essays, while Twitter, Instagram and Reddit excel as forums for sharing observations about other works or references.) If one framed a platform's objective around the maximum volume of individual contribution or artistry, and less around the interactions, you'd get a different kind of network. And across a network of networks, it would be possible to observe which components of a platform contribute most to the unfettered artistry of the individual contributors on them.


I am going to refer to this platform concept as "Mikoshi," because it reminds me of the Japanese portable shrines of the same name, pictured at right. In festival parades, dozens of people heft a one-ton shrine atop their shoulders.  The bobbing of the shrine is supposed to bring good luck to the participants and onlookers. The time I participated in a mikoshi parade, I found it an exhausting effort, fun as it was.  What stuck with me was that the whole group was focused toward one end.  There were no detractors.


Metaphorically, I see the mikoshi act of revelry as somewhat similar to the collaborative sharing of creative artistry that Lisa was pointing out. In Lisa's example, the group had a barrier to entry and a shared intent: you had to be a knitter or a 3D artist to have a seat at the table. Why would hurdles improve the quality of engagement and discourse? Presumably, if you're at that table, you want to see others succeed and create more! The community grants contributors a certain credibility and respect based on the table stakes of participation that got them there.  It is the same on most other effort-intensive sharing platforms, like Mixcloud and Soundcloud, where I contribute. The work of others inspires us to raise our own level of commitment and quality.  The shared direction, the furtherance of art, propels ever more art by all participants.  It improves in a virtuous cycle, driving greater complexity, quality and retention over time.


Achieving a pure utility of greatest contributor creation would be a different process from building a tool optimized purely for the volume or velocity of engagement. Lisa and I posited an evolving, biological style of product "mutation" that might create a proliferating organic process, driven by participant contribution and the automated selection of attributes observed across the healthiest offshoot networks. Maximum individual authorship should be the leading selective pressure for Mikoshi to work. This is not to say that essays are better than aphorisms because of their length. But the goal incentivized by a creativity-inspiring ecosystem should be that the participating individuals feel empowered to create to the maximum extent. Other tools are designed to optimize velocity and visibility, but those elements can be detrimental to individual participation or group dynamics.


To give over control to contribution-driven optimization as an end, Mikoshi would need to be a modular system akin to Automattic's Wordpress platform. But platform mutation would have to happen agnostic of author self-promotion: the optimizing mutation of Mikoshi would need to sit outside the influence of content creators' drive to promote themselves. This is similar to the way PageRank listened to the interlinking of non-affiliated web publishers to drive its anti-spam filter, rather than to the publishers' own attempts at self-promotion. The visibility and promulgation of new Mikoshi offshoots should be delegated to a separate, promotion-agnostic algorithm entirely, one looking at the health of the community of active authors in preceding Mikoshi groups. Evolutionary adaptation is driven by what ends up dying; Mikoshi would be driven by what previously thrived.


I don't think Mikoshi should be a single tool, but an approach to building many different web properties. It centers on planned redundancy and planned end-of-life for non-productive forks of Mikoshi. Any single Mikoshi offshoot could exist indefinitely. But ideally, certain of them would thrive and attract greater engagement and offshoots.

The successive alterations of Mikoshi would be enabled by its capability to fork, as open source projects such as Linux or Gecko do.  As successive deployments are customized and distributed, the most useful elements of the underlying architecture can be annotated with telemetry to suggest optimizations to other Mikoshi forks that may lack certain tools.  This quasi-organic process, with feedback on the overall contribution "health" of the ecosystem as represented by participant contribution, could then suggest attributes for viable offshoot networks to come.  (I'm framing this as akin to a browser's extensions, or a Wordpress template's themes and plugins, which offer optional expansions to pages built on past templates from other developers.)  The end products of Mikoshi would be multitudinous and unconstrained.  As with Wordpress, the attributes included in any future iteration would be at the discretion of the communities maintaining them.
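To make the idea more tangible, here is a purely hypothetical sketch of what that selection process might look like as data structures. Every name here is invented for illustration; nothing like this exists as a real API:

```typescript
// A purely hypothetical sketch of Mikoshi's selection process: forks carry a
// lineage and a contribution-health score, and the healthiest forks' modules
// are suggested to future offshoots. All names are invented.
interface MikoshiFork {
  id: string;
  parentId?: string;          // lineage, as in an open source fork
  modules: string[];          // optional attributes, like themes or plugins
  contributionHealth: number; // e.g., active authors x median works per author
}

// Driven by what previously thrived: gather the modules common to the
// healthiest forks as suggestions for the next offshoot network.
function suggestModules(population: MikoshiFork[], topN = 3): string[] {
  const ranked = [...population].sort(
    (a, b) => b.contributionHealth - a.contributionHealth,
  );
  return [...new Set(ranked.slice(0, topN).flatMap((f) => f.modules))];
}
```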


Of course, Facebook and Reddit could facilitate this.  Yet "roll your own platform" doesn't particularly fit their business models.  Mozilla manages several purpose-built social networks for its communities (Bugzilla and Mozillians internally, and the former Webmaker and the new Hubs for web enthusiasts).  But Mikoshi doesn't particularly fit its mission or business model either.  I believe Automattic is better positioned to go after this opportunity, as Wordpress already powers roughly a third of the world's websites and Automattic has deep competencies in the massively scaled hosting of web pages with social components.


I know from my own explorations on dozens of web publishing and media platforms that each has, in different ways, facilitated and drawn out different aspects of my creativity.  I've seen many of these platforms die off.  It wasn't that they lacked utility or value to their users; most were simply not designed to evolve.  They were too rigid, or encountered political problems within the organizations that hosted them.  As the old Ani DiFranco song "Buildings and Bridges" points out, "What doesn't bend breaks." (Be cautioned that the lyrics contain some potentially objectionable language.)  The web of tomorrow may need a new kind of collaborative social network, one able to weather the internal and external pressures that threaten it.  Designing an adaptive platform like Mikoshi may accomplish this.

Sunday, April 14, 2019

My 20 years of web

Twenty years ago I resigned from my job at a financial news wire to pursue a career in San Francisco.  We were transitioning our news service (Jiji Press, a Japanese wire service similar to Reuters) into a web-based news site.  I had followed the rise and fall of Netscape and the Department of Justice anti-trust case over Microsoft's bundling of IE with Windows.  But what clinched it for me was the Congressional testimony of the Chairman of the Federal Reserve (the US central bank) about his inability to forecast the potential growth of the Internet.

Working in the Japanese press at the time gave me a keen interest in international trade.  Prime Minister Hashimoto negotiated with United States Trade Representative Mickey Kantor to enhance trade relations and reduce the protectionist tariffs the two countries used to artificially subsidize domestic industries.  Japan was the second-largest global economy at the time.  I realized that if I was going to play a role in international trade, it was probably going to be in Japan or on the west coast of the US.
I decided that because Silicon Valley was where much of the growth in internet technology was happening, I had to relocate there if I wanted to engage in this industry.  So I packed up all my belongings and moved to San Francisco to start my new career.

At the time, there were hundreds of small agencies that would build websites for companies seeking to establish or expand their internet presence.  I worked with one of these agencies to build Japanese versions of clients' English websites.  My goal was to focus my work on businesses seeking international expansion.

It was then that I encountered a search engine called LookSmart, which aspired to offer business-to-business search engines to major portals. (Business-to-business is often abbreviated B2B; it is a strategy of supporting companies that have their own direct consumers, called business-to-consumer, abbreviated B2C.)  Their model was similar to Yahoo.com's, but instead of trying to get everyone to visit one website directly, they wanted to distribute the search infrastructure to other companies, combining the aggregate resources needed to support hundreds of companies into a single platform customized on demand for those other portals.

At the time, LookSmart offered only English-language web search.  So I proposed launching their first foreign-language search engine and entering the Japanese market, where Yahoo! had its largest established user base outside the US.  LookSmart's president had strong confidence in my proposal and expanded her team to include a Japanese division to invest in the market launch.  After we delivered the first version of the search engine, Microsoft's MSN licensed it to power their Japanese portal, and LookSmart expanded their offerings to include B2B search services for Latin America and Europe.

I moved to Tokyo, where I networked with the other major portals of Japan to power their web search as well.  Because Yahoo! Japan wasn't offering such a service at the time, a dozen companies signed up to use our search engine.  Once the combined reach of LookSmart Japan rivaled that of Yahoo! Japan's destination website, our management brokered a deal for LookSmart Japan to join Yahoo! Japan.  (I didn't negotiate that deal, by the way.  Corporate mergers and acquisitions tend to happen at the board level.)

By this time Google was freshly independent of its exclusive contract with Yahoo! to provide what they called "algorithmic backfill" for the Yahoo! Directory service that Jerry Yang and David Filo had pioneered at Stanford University.  Google launched a B2C portal and began offering its own B2B publishing service by acquiring the Yahoo! partner Applied Semantics, giving it the ability to put Google ads into any webpage on the internet without needing users to conduct searches anymore.  Yahoo!, fearing competition from Google in B2B search, acquired the Inktomi, AltaVista, Overture and Fast search engines, three of which were leading B2B search companies.  At this point Yahoo!'s Overture division hired me to work on market launches across Asia Pacific beyond Japan.

With Yahoo! I had excellent experiences negotiating search contracts with companies in Japan, Korea, China, Australia, India and Brazil before moving into their Corporate Partnerships team to focus on the US search distribution partners.

Then, in 2007, Apple launched the first iPhone.  Yahoo! had been operating a lightweight mobile search engine for HTML optimized to display on mobile phones.  One of my projects in Japan had been to introduce Yahoo!'s mobile search platform as an expansion of the Overture platform.  However, with the iPhone able to show full web pages, the market was obviously going to shift.

Several of my colleagues and I became captivated by the potential of developing specifically for the iPhone ecosystem.  So I resigned from Yahoo! to launch my own company, ncubeeight.  As in the work I had done at LookSmart and before, we focused on companies that had already launched on the desktop internet and were now seeking to expand to the mobile internet ecosystem.

Being a developer in a nascent ecosystem was fascinating.  But it's much more complex than the open internet, because discovery of content on the phone depends on going through a marketplace, which is something like a business directory.  Apple and Google knew there were great business models in being the discovery gateway for this specific type of content.  Going "direct to consumer" is an amazing marketing challenge on small-screen devices.  And gaining visibility in Apple iTunes and Google Play is an even more challenging marketing problem than publicizing your services on the desktop internet.

Next I joined Mozilla to work on Firefox platform partnerships.  It has been fascinating working with this team, which originated with the Netscape browser in the 1990s and transformed into an open-source non-profit advancing internet technology in conjunction with, rather than solely in competition against, Netscape's former competitors.

What is most likely interesting from an outside perspective is that companies that used to compete against each other for engagement (by which I mean your attention) are now unified in working together to enhance the ecosystem of the web.  Google, Mozilla and Apple all now embrace open source for the development of their web rendering engines.  These companies are beholden to an ecosystem of developers who create end-user experiences, as well as to the underlying platforms each company provides as a custodian of the ecosystem.  The combined goals of a broad, collaborative ecosystem are more important and impactful than any single platform or company.  A side note: Amazon is active in the wings here, basing their software on spin-off code from Google's Android open source project.  And after their mobile phone platform faltered, they began focusing on a space where they could pioneer a wholly new web interface: voice.

When I first came to the web, much of it was made up of static HTML.  Over the past decade, web pages shifted to dynamically assembled pages and content feeds determined by individual user customizations.  This is a fascinating transition that I witnessed while at Yahoo!, and it has been the subject of many books.  (My favorite is Sarah Lacy's Once You're Lucky, Twice You're Good.)

Sometimes, in reflective moments, one thinks back on what one's personal legacy will be.  In this industry, dramatic shifts happen every three months.  Websites and services I enjoyed tremendously 10 or 20 years ago have long since been acquired, shut down or pivoted into something new.  So what will still exist to look back on after 100 years?  Probably very little, except the content created by website developers themselves.  It is the diversity of accessible web content that brings us every day to the shores of the world wide web.

There is a service called the Internet Archive that registers historical versions of web pages.  I wonder what the current web will look like from a future perspective, in this era of dynamically customized feeds that differ based on the user viewing them.  If an alien landed on Earth and started surfing the history of the Internet Archive's "Wayback Machine," I imagine it would see a dramatic drop-off in content published in static form after 2010.

The amazing thing about the Internet is the creativity it brings out of the people who engage with it.  Back when I started telling people the story of the web, I realized I needed my own web page, so I had to figure out what I wanted to amplify to the world.  Because I admired the folk percussion I'd seen while living in Japan, I decided to make my website about the drums of the world.  I used GeoCities' web editor to create the web page you see at right, and I've left it in its original 1999 GeoCities template design for posterity's sake.  Since then my drum pursuits have expanded into various other web projects, including a YouTube channel dedicated to traditional folk percussion, a Flickr channel dedicated to drum photos, and later a Soundcloud channel and a Mixcloud DJ channel for sharing music I'd composed or discovered over the decades.

The funny thing is, when I created this website, people found me whom I never would have met otherwise.  I got emails from people around the globe who were interested in identifying drums they'd found.  Even Cirque du Soleil wrote to me asking for advice on drums they should use in their performances!

Since I'd opened the curtains on my music exploration, I started traveling to regions of the world with unique percussion styles.  What had started as a small web development project became a broader crusade in my life, taking me to remote corners of the world I never would have traveled to otherwise.  And naturally, this spawned a new website, with another YouTube channel dedicated to travel videos.

The web is an amazing place where we can express ourselves, discover and broaden our passions and, of course, connect to others across the continents.

When I first decided to leave the journalism industry, it was because I believed that industry was inherently about waiting for other people to do or say interesting things.  In the industry I pursued, the audience was waiting for me to do the interesting thing myself.  The Internet is tremendously valuable as a medium.  It has been an amazing 20 years watching it evolve.  I'm very proud to have had a small part in its story.  I'm riveted to see where it goes in the next two decades!  And I'm even more riveted to see where I go, with its help.

On the web, the journey you start seldom ends where you thought it would go!