tag:blogger.com,1999:blog-42887469316059153782024-02-15T19:30:08.938-08:00ncubeeightMusings on web development, apps and the future of the internet.ncubeeighthttp://www.blogger.com/profile/07874174268038500927noreply@blogger.comBlogger35125tag:blogger.com,1999:blog-4288746931605915378.post-38506092999358813492023-10-15T14:08:00.057-07:002024-02-15T19:29:37.318-08:00Using VR comfortainment to bring an end to the US blood supply shortage<p>I pursued my MBA during a fascinating time in the world economy. We’d endured a pandemic that shut down significant portions of the economy for nearly a year, followed by surging interest rates as the government response to the pandemic produced significant inflation and subsequent layoffs in my region. While this was a dramatic time for the world, it was a fascinating time to return to academia and evaluate the impacts of natural and artificial stimuli on the global economy.<br /></p><div class="separator" style="clear: both; text-align: center;"><a href="https://www.redcross.org/about-us/news-and-events/press-release/2022/blood-donors-needed-now-as-omicron-intensifies.html" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;" target="_blank"><img border="0" data-original-height="637" data-original-width="995" height="205" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi_z5TFuiXggualJzB-JoG558m1JT-yV-IB8drzE-waTcLXsX5TEsuA5JKGxRVSZ3tgwezLC59tLeChaycuK4fmXeLtQ3c33olQjG-Bk5XzPvldknV1_8Zdww0xZi9AUCsSE3GY3bKm12DDtgpvs9DLSE1tC5SNGisqcW1OOAHfkvUOhuHMWhvS4ug6LOQ/s320/Blood%20Crisis%20RedCross.png" width="320" /></a></div><p>For our master’s thesis we were asked to identify an opportunity in the economy that could be addressed by a new business entrant. 
In discussing with several of my MBA cohort, we decided to focus on the blood supply shortage that resulted from the <a href="https://www.redcross.org/about-us/news-and-events/press-release/2022/blood-donors-needed-now-as-omicron-intensifies.html" target="_blank">end of the pandemic</a>. Why, we wondered, would the US go into a blood crisis at the end of the pandemic? Shouldn’t that have been expected during the peak of the pandemic in 2020 or 2021? It turns out that during the pandemic, surgeries and car crashes dropped at the same time that blood donations did. It was only after the pandemic ended that supply and demand got out of sync. In 2022 people started going to hospitals again (and getting injured at normal rates) while the blood donor pool had significantly shrunk and had not recovered its pre-pandemic rate of participation. So hospitals were running out of blood. What's more concerning is that the drop in donor participation doesn't look like a short-term aberration. Something needs to shift in the post-pandemic world to return the US to a stable blood supply. This was a fascinating subject for study.<br /><br />As we began our studies we interviewed staff at blood banks and combed through the press to understand what was taking place. There were several key factors in the drop-off of donors. Long Covid had affected 6% of the US population, potentially reducing willingness to donate among individuals who’d participated before. (Even though blood banks accept donations from donors who have recovered from Covid, the feeling that one's health is not at full capacity affects one's sentiment about passing on blood to another.) At the same time there was a gradual attrition of the baby boomer generation from the donor pool, while younger donors were not replacing them due to generational cultural differences. 
Finally, the hybrid-work model companies adopted post-pandemic meant that blood-mobile drives at companies, schools and large organizations could no longer draw the turnout they formerly had. <br /><br />The donation pool we’ve relied on for decades requires several things, so we tried to identify those aspects that were directly within the control of the blood banks:<br /></p><ul style="text-align: left;"><li>First, an all-volunteer unpaid donor pool requires a large number of people in the US (~7 million) willing to help out of their own internal motivations and with ample time to do so. Changing people’s attitudes toward volunteerism and blood donation is hard, and marketing efforts to achieve this are expensive. In an era when more people are working multiple jobs, the flexibility to volunteer extra time is becoming constrained. Time scarcity among would-be donors is likely to keep worsening relative to pre-pandemic times.<br /></li><li>Second, there needs to be elasticity in the eligible donor pool to substitute for ill would-be donors in times of peak demand. Fortunately, this year the FDA has started expanding eligibility criteria in reaction to the blood crisis, permitting people who were previously restricted from donating to participate now. However, this policy matter is outside the control of blood banks themselves. Blood demand is seasonal, peaking in winter and summer. But donor behavior is consistent, and donors are difficult to entice when need spikes, due to their own seasonal illnesses or summer travel plans.</li><li>Third, and somewhat within the control of blood banks, is in-clinic engagement and behavior. Phlebotomists can try to persuade donors to upgrade their donation time during admission and pre-screening. This window of time, when an existing donor is sitting in clinic, is the best time to promote persistent return behaviors. 
Improving how this is achieved is the best immediate lever for bolstering the donor pool toward a resilient blood supply. But should we saddle our phlebotomists with the task of marketing and up-selling donor engagement? <br /></li></ul><p>Considering that there is no near-term solution to the population problem of the donor pool, we need to do something to bolster and expand the engagement of the remaining donors we have. In our studies we came across several interesting references. "If only one more percent of all Americans would
give blood, blood shortages would disappear for the foreseeable
future." (Source <a href="https://www.givingblood.org/about-blood/blood-facts.aspx" target="_blank">Community Blood Center</a>) This seems small. But currently approximately 6.8 million
Americans donate blood, less than 3% of Americans. So it's easy to see
how a few million more donors would assuage the problem. But the
education and marketing needed to achieve this end would be incredibly
expensive, slow and arduous to achieve. It’s hard to change that many
minds in a short time frame. Yet this comment from the same source gave us an avenue to progress with optimism: "If
all blood donors gave three times a year, blood
shortages would be a rare event. The current average is about
two." We agreed that this seemed like a much more achievable marketing strategy. In our team calls, <a href="https://www.sasin.edu/profile/roy-tomizawa" target="_blank">Roy Tomizawa</a> commented that we need to find something that makes people <i>want</i>
to be in the clinic environment beyond their existing personal motivations for helping
others. He suggested the concept of “comfortainment” as a strategy, whereby
people could combine their interest in movie or TV content with time they’d
sit still in the clinic for blood donation, dialysis or other medical
care. If we were to transform the clinic from its bright fluorescent-lit environment into a calm relaxing space, more people might wish to spend more time there.<br /></p><p>As a life-long donor, I've heard a lot of in-clinic promotions to increase the frequency of donation. But during intake so many things are happening: 1) FDA screening questions, 2) a temperature check, 3) a blood pressure measurement, 4) a hemoglobin/iron test, 5) verbal confirmation of no smoking or vaping. This battery of activity is an awkward time for phlebotomists to insert promotional campaigns on increasing engagement. One day I noticed some donors were doing something different in the blood bank and asked about it. I was then informed how the blood <a href="https://www.yalemedicine.org/conditions/apheresis" target="_blank">apheresis</a> process differs from whole blood donation. It uses a centrifuge device to collect more of a specific blood component at the time of draw from a single donor, returning the rest of the blood to the donor. Not only does this yield multiple individual units of blood product per draw; the recovery time between donations is also shorter. Whole blood donations require 2 months for the donor to replenish their blood naturally before another whole blood donation. Apheresis donors lose less blood overall and can therefore return more often. The only downside is that it requires more time from the donor in-clinic.<br /></p><p>Because apheresis was the most flexible variable that blood banks could adjust as demand and supply waxed and waned, our study zeroed in on optimizing this particular lever of supply to address the blood shortage. In a single blood draw via apheresis, a donor can provide 3 units of platelets, far more than a whole blood draw yields. 
This allows the blood bank to supply three units to hospitals immediately after the draw, instead of having to centrifuge pooled post-donation units of whole blood from multiple donors. Platelets are uniquely needed by certain hospital patients, such as cancer patients and those with blood clotting disorders. As for other blood components, an apheresis blood draw can provide 2 times more red blood cells than would otherwise be donated as whole blood. At the same time that a donor is providing platelets, they may also provide plasma in the same draw, which carries natural antibodies from healthy donors that can help patients with weakened immune systems. </p><p>Hearing all this you might think that everybody should be donating via apheresis. But the problem with it is the extra time needed, an additional hour of donor time at least. A donor planning to donate
for just a 15-minute blood draw may be reluctant to remain in apheresis
for one to two hours, even if it triples or quadruples the benefit of their donation. Though apheresis is one lever that can be adjusted immediately based on local hospital demand, asking donors to make the trade-off for the increased benefit can be a hard sell. </p><p>When I first tried apheresis, I didn’t enjoy it very much. But that’s because I don’t like lying down and staring at fluorescent lights for long periods of time. Lying on the gurney for 15 minutes is easy and bearable. Having phlebotomists try to persuade hundreds of people to change their donations to something much more inconvenient is a difficult challenge. Some blood banks offer post-donation coupons for movies or discounts on food and shopping to promote apheresis donations. My team wondered if we could bring the movies <i>into the clinic</i> the way that airlines had introduced movies to assuage the hours of impatience people feel sitting on flights. Having people earn two hours of cinema time after donation by sitting still for two hours in clinic raises the question of why you couldn't simply combine the two. Donors could watch IMAX films at the clinic when they'd planned to be immobile anyway!<br /><br />We interviewed other companies that had launched VR content businesses to help people manage stress, chronic pain or to discover places they may want to travel to while they're at home. We then proceeded to scope what it would take to create a device and media distribution company for blood banks to entice donors to come to the clinic more often and for longer stays, with VR movies and puzzle games as the enticement. Introducing VR to apheresis draws doesn't create more work for phlebotomist staff. In fact one phlebotomist can draw several apheresis donations at once because the process provides an hour of idle time between needle placement and removal. 
So while we increase yield per donor, we also reduce the busywork of the phlebotomy team, introducing new cost efficiencies into clinic processing time overall. <br /><br />Consumer-grade VR headsets have now decreased in price to the point that it would be easy to give every donor an IMAX-like experience of a movie or TV show during every 2-hour donation. To test the potential for our proposed service, we conducted two surveys. We started with a survey of <a href="https://docs.google.com/forms/d/1CGY7UIfJaiCnEOD9Xvb1nYg8Go1GKZ-ttdZ6RRjZ-Kw/edit#responses" target="_blank">existing donors</a> to see if they would be more inclined to attend a clinic that offered VR as an option. (We were cautious not to introduce an element that would make people visit the clinic less.) We found that most existing donors wouldn’t be more compelled to donate just because of the VR offering. They already have their own convictions to donate. Yet one quarter of respondents claimed they’d be more inclined to donate at a clinic where the option existed than at a clinic that did not offer VR. The second survey was for people who <a href="https://drive.google.com/file/d/1U6v8V5EVuqnhtr4XYE3_MVtDSSngILWA/view?usp=sharing" target="_blank">hadn't donated</a> yet. There we heard significant interest in the VR enticement, particularly among a younger audience.</p><p>Fortunately, we were able to identify several existing
potential collaborators that could make our media strategy easy to implement for
blood clinics. Specifically, we needed to find a way to address
sanitation of devices between uses, for which we demoed the ultraviolet
disinfection chambers manufactured by Cleanbox Technologies. If donors were to wear a head-mounted display, clinics would need to ensure that any device introduced to a clinical setting had been cleaned between uses. Cleanbox is able to meet the 99.99% device sterilization standard required for use in hospitals, making it the best solution for a blood clinic introducing VR to its comfortainment strategy.<br /></p><p></p><p>Second, in order for the headsets to have regular updates and telemetry software checks, we talked to ArborXR, whose platform would allow a fleet of deployed headsets to be updated overnight through a secure update. This would take device maintenance concerns away from the medical staff onsite as well. Devices being sterilized, charged and updated overnight while they weren’t in use could facilitate a simple deployment alongside the apheresis devices already supplied to hospitals and blood banks through medical device distributors, or as a subsequent add-on. </p><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhEAoMF1fQEXPk8EebHnSv9We4RYv615P-Lg274a3YLM5imwd7Kga9hvrNzDIjImaBuUNsPQi3go7YW9x0wlsqh6CIv7PeqYhEBU3D8FNec2s6Jhg-_jsQLNQEwMu5ZmhjU-raI9GVgGd3n6tVEd_QlmTCSqi6iZtAoGhR8DMAr1d6smjsml2vMv0zWBr4/s3088/VitureVitalantImage.jpeg" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="2316" data-original-width="3088" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhEAoMF1fQEXPk8EebHnSv9We4RYv615P-Lg274a3YLM5imwd7Kga9hvrNzDIjImaBuUNsPQi3go7YW9x0wlsqh6CIv7PeqYhEBU3D8FNec2s6Jhg-_jsQLNQEwMu5ZmhjU-raI9GVgGd3n6tVEd_QlmTCSqi6iZtAoGhR8DMAr1d6smjsml2vMv0zWBr4/s320/VitureVitalantImage.jpeg" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><b><span style="font-size: x-small;">Using 
the Viture AR glasses at an apheresis donation</span></b><br /></td></tr></tbody></table><p>While we hope that our study persuades some blood banks to introduce comfortainment strategies to reward their donors for time spent in clinic, I myself am firmly convinced that this is the way to go. I now donate multiple times a year because I have something enjoyable to partake in while I’m sharing my health with others.<br /><br />I’d like to thank my collaborators on this project, Roy Tomizawa, Chris Ceresini, Abigail Sporer, Venu Vadlamudi and Daniel Sapkaroski, for their insights and work to explore this investment case and business model together. If you are interested in hearing about options for implementing VR comfortainment or VR education projects in your clinic or hospital, please let us know.</p><p> </p><p>For our service promotion video we created the following pitch, which focuses on the benefits the media services approach brings to blood clinics, dialysis clinics and chemotherapy infusion services. 
</p><div class="separator" style="clear: both; text-align: center;"><br /></div><br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><iframe allowfullscreen="" class="BLOG_video_class" height="526" src="https://www.youtube.com/embed/WQeUMEBSIBY" width="633" youtube-src-id="WQeUMEBSIBY"></iframe></div><p></p><div class="separator" style="clear: both; text-align: center;"><br /></div><p></p><p></p><p>Special thanks to the following companies for their contribution to our research:</p><p><a href="https://quantic.edu/employer-network/our-companies/" target="_blank">Quantic School of Business & Technology</a> </p><p><a href="http://vitalant.org/" target="_blank">Vitalant Blood Centers</a><br /></p><p><a href="https://www.tripp.com/" target="_blank">Tripp VR</a></p><p><a href="https://cleanboxtech.com/" target="_blank">Cleanbox Technologies</a></p><p><a href="https://www.vivavita.org/" target="_blank">Viva Vita</a></p><p><a href="https://www.abbott.com/" target="_blank">Abbott Labs </a> <br /></p><p><a href="https://ivrha.org/" target="_blank">International VR & Healthcare Association</a></p><p><a href="https://www.thevrara.com/" target="_blank">VR/AR Association</a><br /></p><a href="https://www.awexr.com/" target="_blank">Augmented World Expo</a><p></p>ncubeeighthttp://www.blogger.com/profile/07874174268038500927noreply@blogger.com0tag:blogger.com,1999:blog-4288746931605915378.post-85297749592622529282023-03-19T18:22:00.019-07:002023-03-19T20:34:02.614-07:00The evolution of VR spaces and experiences<p></p><div class="separator" style="clear: both; text-align: center;"><a 
href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj_JUisOYOGpj3D2zpb1WxjsNKqvpPcpQzf1zX0HVyTxJPiTMKOrwMfdGhXgA3Tfsm-8A2uQZ4hHeIYUGa5nxX1729fspq5WFPZrSjjLDnnmgWebuxGEsKNsBpUvp5gi52dTk4GZrV1-D2LrTg_BHb9CVbJ5L8_3x0Rc6kqxtiAKJxdV62oBBxVB8nb/s1331/Felix%20&%20Paul%20Space%20Explorers.png" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;" target="_blank"><img border="0" data-original-height="680" data-original-width="1331" height="163" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj_JUisOYOGpj3D2zpb1WxjsNKqvpPcpQzf1zX0HVyTxJPiTMKOrwMfdGhXgA3Tfsm-8A2uQZ4hHeIYUGa5nxX1729fspq5WFPZrSjjLDnnmgWebuxGEsKNsBpUvp5gi52dTk4GZrV1-D2LrTg_BHb9CVbJ5L8_3x0Rc6kqxtiAKJxdV62oBBxVB8nb/w320-h163/Felix%20&%20Paul%20Space%20Explorers.png" width="320" /></a></div>Six years ago, Oculus (now Meta) launched the first consumer version of its VR headset, the Rift CV1. I had my first experience of that new media interface at San Francisco's Game Developers Conference (GDC). Oculus technicians escorted me into a soundproof dark room and outfitted me with the headset attached to an overhead boom that kept the wires out of my way as I experienced free-motion simulated environments crafted in Epic's Unreal Engine. (This is the same developer environment that was used to create <a href="https://www.youtube.com/watch?v=Ufp8weYYDE8" target="_blank">The Mandalorian</a> TV series.) The memory of that demonstration is strong to this day because it was such a new paradigm of media experience. As I moved in a simulated world, the parallax depth of distant objects shifted differently relative to those nearby. Everything appeared a bit like a cartoon, more colorful than the real world. 
But the sense of my presence in that world was incredibly compelling and otherwise realistic.<p></p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiQrGF-j9K3eDwbJT8vkZ9YbeAYOCDdcmhVaGoDkgzLD2zpK1Zk8nUaSepaN2bmgB7OTiJE1aWNMjh1BxSkW_Om62-LMKK2Glmw8GOORSCP6wg4dJUXrI7BsVwXBI3fGeHSetS7cNlZnMyJ3gwRpaVTxfDLNVGd9xro2rGk4zkcbQVkHtoBf8lTTkJQ/s1739/ISS%20Floor%20Map.jpg" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="1739" data-original-width="1228" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiQrGF-j9K3eDwbJT8vkZ9YbeAYOCDdcmhVaGoDkgzLD2zpK1Zk8nUaSepaN2bmgB7OTiJE1aWNMjh1BxSkW_Om62-LMKK2Glmw8GOORSCP6wg4dJUXrI7BsVwXBI3fGeHSetS7cNlZnMyJ3gwRpaVTxfDLNVGd9xro2rGk4zkcbQVkHtoBf8lTTkJQ/w141-h200/ISS%20Floor%20Map.jpg" width="141" /></a></div>Yesterday I went into a physical VR gym in Richmond, California with a dozen other people to try a simulated journey in which we would physically <i>walk</i> through a virtual replica of the International Space Station. It was profound to reflect on how much the technology has advanced in the six years since my first simulated solitary spacewalk at GDC. The hosts of the event walked us through a gradual orientation narrative, as if we were astronauts ascending an Apollo-era launch tower, before we were set free to roam the purely visual ISS. Along the way we encountered brief, previously filmed video greetings from real astronauts at the exact locations marked by green dots on the map to the right. When we approached the astronauts, glowing orbs showed the camera positions from which they had been filmed on the ISS. 
By standing exactly where the astronauts had stood during filming, we could see all the equipment around them and experience what living on the ISS was like for them.<div><br /></div><div>In a recent interview with Wall Street Journal reporters, Philip Rosedale (the founder of Linden Lab) commented, "The appeal of VR is limited to those people who are comfortable putting on a blindfold and going into a space where other people may be present." Here I was, actually doing that in a crowd of people I had never met before. All I could see of those people was a ghostly image of their bodies and hand positions, with a gold, blue or green heart beacon indicating their role: fellow VR astronauts, family members of those in a group, or the event staff who kept an eye out for anyone having hardware or disorientation issues with the VR environment. Aside from an overheating headset warning and a couple of times when the spatial positioning lost sync with the walls of the spaceship, I didn't have any particular issues. It was very compelling!<p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh5IllXftdHQoEdQo3RqiUjAGLf4Q9POd4Aw3AnkKsOeJn-OdqO3EtiV7Wg9vtNS3iaaFH1AHaX6qlFRtjaOav4ymmNCwikPgdS4Qq-QfvqEaJhiqFMpGHnEiRxGHuiyJofZA-15mmVOgGqb-nFrqcQMJkj3yPGR1kOzGQ4D82oi_pW_GfhG3ca1iPM/s2227/InfiniteVR.jpg" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="1255" data-original-width="2227" height="225" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh5IllXftdHQoEdQo3RqiUjAGLf4Q9POd4Aw3AnkKsOeJn-OdqO3EtiV7Wg9vtNS3iaaFH1AHaX6qlFRtjaOav4ymmNCwikPgdS4Qq-QfvqEaJhiqFMpGHnEiRxGHuiyJofZA-15mmVOgGqb-nFrqcQMJkj3yPGR1kOzGQ4D82oi_pW_GfhG3ca1iPM/w400-h225/InfiniteVR.jpg" width="400" /></a></div>Six years ago at GDC, I remember a clever retort a developer shared with me at the unveiling of the Rift CV1. 
While waiting in line at the demo booth, I asked what he thought about the nascent VR technology. He said, “Oh, I think it will be like the Xbox Kinect. At first, nobody will have one and everyone will want one. Then, later, everyone will have one and nobody will want one!” Now, years later, we can look back to see what happened. VR hasn't yet reached broad market penetration because of the rather high price of hardware. But when the pandemic shuttered the outside world to us temporarily, many of us took to virtual workrooms to meet, socialize and work. Meta was well positioned for this. Zoom conference calls felt like flashbacks to the Brady Bunch/Hollywood Squares grid of tic-tac-toe faces. Zoom felt oddly isolating in contrast to sharing spaces with people physically. Peering into people’s homes also seemed a little disturbing. Several engineers and product managers I frequently meet with suggested we switch to VR instead. One of them challenged me to give a lecture in VR. So I researched how Oxford University was doing VR lectures in EngageVR and conducted my own lecture on the history of haptic consumer technology in an EngageVR lecture room. At the time it was challenging to navigate lecture slides while simultaneously controlling my spatial experience of appearing as a lecturer in the classroom. But I succeeded in working around the rough edges of the early platform's limitations. (EngageVR has drastically improved since then, introducing customizable galleries and broader support for imported media assets.)<p>
While the experience felt rough at first, I found it much more compelling than using shared slides and grid camera views of the Zoom conference call format. So my colleagues collaborated with me to create a bespoke conference room where we could import dozens of lecture resources, videos, PDFs and 3D images. In this conference room a large group could assemble and converse in a more human-like way than staring into a computer camera. While we gave up the laptop camera with its tag-team game of microphone hand-off, we took up VR visors in which we could see everybody at once, oriented around us in a circle. Participants could mill around the room and study different exhibits from previous discussions while others of us were engrossed in the topic of the day.
</p><p>
I know that people like us are rather atypical because we adopt technology long before the mainstream consumer. But the interesting thing is that years later, even with the pandemic isolation waning, we all still prefer to convene in our virtual conference spaces! It typically comes down to two choices of venue. If it’s a large group, we assemble in the lecture hall hosted on Spatial’s web servers. These are fast-paced and scintillating group debates where we have to coordinate speakers by hand waving or by following the auditory cues of interjecting speakers. If it’s four people or fewer, we use EngageVR or VTime, which allow for a more intimate discussion. Those platforms have us use virtual avatars that, unlike Spatial's, don’t resemble our physical bodies or faces. But the hand-off of dialog is easy, because the natural auditory cues of each speaker come through clearly.</p><p>
“Why does this simulated space feel more personal than the locked-gaze experience of a Zoom call?” I wondered. My thought is that people speak differently when they are being stared at (by a camera or otherwise) than when they have a free-moving gaze and a sense of personal space. Long ago I heard an interview with NPR radio host Terry Gross. She said that she never interviewed her guests on camera, as she preferred to listen closely only to their voice. Could this be the reason the virtual conference room feels more personal than the video conference?
</p><p>
During my years studying psychology, I encountered the idea of Neuro-Linguistic Programming, in which author Richard Bandler lectured that the motion of the eyes allows us to access and express different emotions tied to how we remember ideas and pictorial memories. In NLP’s therapeutic uses, a therapist can understand traumatic memories discussed in the process of therapy based on how people express with their eyes and bodies during memory recollection. Perhaps freedom from camera-gaze permits greater psychological freedom in the VR context?
</p><p>
In lectures and essays by early VR pioneers, I kept hearing references to people who identify as neurodivergent preferring the virtualized environment. In my early study of autism spectrum disorder, I had read that one theory of ASD is that it involves an over-reaction to sensory stimuli. People who have ASD often avoid eye contact due to the intensity of social interaction. In casual contexts this behavior can be misinterpreted as an expression of disinterest or dislike. Perhaps virtualized presence in VR can address this issue of overstimulation, allowing participants a pared-down environmental context. In an intentionally-fabricated space, everything present is there by design.</p><p>
I still don’t think the trough of the hype cycle is upon us for VR, considering my developer friend’s theory about the land of VR disenchantment. First, VR is still too expensive for most people to experience a robust VR setup. The "Infinite" ISS exhibit costs considerably more than watching an IMAX movie, its nearest rival medium. Yet soon Samsung, ByteDance, Pimax and Xiaomi are coming to market with new VR headsets that will drive down the cost of access and give most of the general public a chance to try it. I'm curious to see when we will get to that point of "everybody having it and nobody wanting it." I still find myself preferring the new media social interactions because they approximate proximity and real human behavior better than Zoom, even if they still have a layer of obvious artificiality. </p><p>A funny thing is that I have a particular proclivity for visits to space in my VR social sessions. After my GDC experience years ago I downloaded the <a href="https://www.bbc.co.uk/taster/pilots/home-a-vr-spacewalk" target="_blank">BBC's Home</a> VR app that simulates a semi-passive perspective of an astronaut conducting a space walk. This allowed me to relive my GDC experience, with a surprise twist involving space debris. Then I tried the <a href="https://www.oculus.com/experiences/rift/1178419975552187/" target="_blank">Mission ISS</a> walk-through VR app that gives users a simulated experience of floating around inside a realistic-looking ISS assembled from NASA photographs of the station. Then, when Meta announced its new Horizon Venues platform, I was able to go into a virtual IMAX-style theater with a gigantic half-dome screen rendered in front of hundreds of real-time avatars of people from around the world, to watch 360 videos taken from the ISS and produced for redistribution by <a href="https://theinfiniteexperience.world/" target="_blank">Felix & Paul Studios</a>. 
And finally this week I was able to visit the <a href="https://www.felixandpaul.com/?infinity" target="_blank">Phi Studios</a> physical walk-through. What I like about this progression is that the experience became more and more social, getting away from the feeling of the movie <a href="https://en.wikipedia.org/wiki/Gravity_(2013_film)" target="_blank">Gravity</a>, of being isolated in space.<br /></p><div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiL1CgtJvXOFeXO9avQDpAuXYgSmrbVA-MnxfEIcDLgbE33wehzVsFi5AgX-pyNVJoidIW-As8pwidoGIMr5mn-KSnLLO5RRk5rcN9Zq7WBy851Tg7AS5zowsu9N4DAXsDMz0DENa-kl3oeJE4qn2p0C9B6t2cIMTsCxfWPI_32SmyuJvdM2KV0v2G3/s2320/Vtime%20in%20Space.png" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="1986" data-original-width="2320" height="343" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiL1CgtJvXOFeXO9avQDpAuXYgSmrbVA-MnxfEIcDLgbE33wehzVsFi5AgX-pyNVJoidIW-As8pwidoGIMr5mn-KSnLLO5RRk5rcN9Zq7WBy851Tg7AS5zowsu9N4DAXsDMz0DENa-kl3oeJE4qn2p0C9B6t2cIMTsCxfWPI_32SmyuJvdM2KV0v2G3/w400-h343/Vtime%20in%20Space.png" width="400" /></a></div>Yet, for the most social experience of all, my friends and I like to go to a simulated space station hovering 250 miles above Earth, where we can sit and have idle conversations as a realistic-looking model of the planet spins beneath us. This is powered by a social app called <a href="https://vtime.net/" target="_blank">Vtime</a>. When I go here with my colleagues, we inevitably end up talking about the countries we're orbiting over and relating experiences outside our day-to-day lives. Perhaps it takes that sense of being so far removed from the humdrum daily environment to let the mind wander to topics spanning the globe, outside the narrow confines of our daily concerns. 
In one such conversation, my friend Olivier and I got into a long discussion about the history and culture of Mauritius, his home country, over which we were then flying. Vtime's Space Station location only has 4 chairs for attendees to sit in at a time. So we use this for small group discussions only. If you ever get inspired to try VR with your friends, I recommend trying this venue for your team discussions. It's hard to say what is so compelling about this experience in contrast to gazing at people's eyes in a video conference. But even after the pandemic lockdown subsided and we could once again meet in person, I still find myself drawn back to this simulated environment. I believe when every one of us has access to this, we will come to prefer it for remote-meetings in lieu of the past decades' 2D panel plus camera.<br /></div><div><br /></div><div><br /></div><div><br /></div><p><br /></p><p></p></div>ncubeeighthttp://www.blogger.com/profile/07874174268038500927noreply@blogger.com0tag:blogger.com,1999:blog-4288746931605915378.post-16589442894957122262022-10-02T09:54:00.035-07:002023-03-18T19:29:21.297-07:00Coding computers with sign language<p>I am one of those people who searches slightly outside the
parameters of the near-term actual with an eye toward the long-term
feasible, for the purpose of innovation and curiosity. I'm not a
futurist, but a probable-ist, looking for the ways we can leverage the
technologies and tools we have at our fingertips today to pursue the
adjacent opportunities those tools open up. There are millions of people at any
time thinking about how to apply any specific technology
in novel ways, pushing its capabilities toward
exciting new utilities. We often invent the same things using different techniques, the way that eyes and wings evolved via separate paths in nature, a phenomenon called convergent evolution. I remember going to a Google developer event in 2010 and hearing the company announce a product that described my company's initiative down to every granular detail. At the time I wondered if someone in my company had jumped the fence. But I then realized that our problems and challenges are common. It's only the approaches to address them and the resources we have that are unique.<br /></p><p>When I embarked on app development during the
launch of the iPhone, I knew we were in a massive paradigm shift. I became captivated with the potential that we could use
camera interfaces as inputs to control the actions of computers. We use web cameras to send messages person to person over the web. But we could also communicate commands directly to the machine itself if we added an interpretive layer that translates what the camera sees into instructions.</p><p>This fascination with the potential future started when I was working with the first release of the iPad. My
developer friends were toying around with what we could do to extend the
utility of the new device beyond the bundled apps. At the time, I used a
Bluetooth keyboard to type, because speech APIs were crude and not yet
interfacing well with the new device, and because the on-screen keyboard was difficult to use. One pesky thing I realized was that
there was no mouse to communicate with the device. Apple permitted keyboards to pair, but they didn't support the pairing of a Bluetooth mouse. Every
time I had to place the cursor, I had to touch the iPad, and it would
flop over unless I took it in my hands. </p><p>I wanted to use it as an abstracted
interface, and didn't like the idea that the screen I was meant to read through would get fingerprints on it unless I bought a stylus to touch it with. I was acting in an old-school way, wanting to port my past computer interaction model onto a new device, while Apple wanted the iPad to be a
tactile device, seeking to shift user expectations. I wanted my device to adapt to me rather than
having me adapt to it. "Why can't I just gesture to the camera instead of touching
the screen?" I wondered. </p><p>People say necessity is the mother of invention. I
often think that impatience has sired as many inventions as necessity.
In 2010 I started going to developer events to scope out use cases of real-time camera input. This kind of thing is now referred to as "augmented
reality," where computer-generated information overlays some aspect of our interaction with the world outside the computer itself. At one of these events, I met an
inspirational computer vision engineer named <a href="https://www.linkedin.com/in/nicolarohrseitz/" target="_blank">Nicola Rohrseitz</a>. I
told him of my thoughts that we should have a touchless-mouse input for
devices that had a camera. He was thinking along the same lines. His wife played stringed instruments. Viola and cello players have trouble
turning pages of sheet music or touching the screen of an iPad because
their hands are full as they play! So gesturing with a foot or a wave
was easier. A gesture could be captured by tracking motion as shifts of light and color across pixel locations on the camera chip. He was able to track those pixel-color shifts locally on the device and render them as input to an action on the iPad. He wasn't tracking the hand or foot directly; he was analyzing the images after they were written into random access memory (RAM). By doing this on
device, without sending the camera data to a web server, you avoid any
kind of privacy risk of a remote connection. By having the iPad analyze what it was seeing, it could interpret the motion as a command and thereafter turn the page of sheet music on his wife's iPad. He built an app to achieve this for his wife's use. And she was happy. But it had much broader implications for other actions.</p><p>What else could be done with signals
"interpreted" from the camera beyond hand waves, we wondered? Sign language was the obvious one. We realized at the time that the challenge was too complex then, because sign language isn't static shape capture. Though the ASL alphabet consists of static hand shapes, most linguistic concept signs involve a hand shape moving in a particular direction over a period of time. We couldn't have achieved this without first achieving figure/ground isolation. The iPad camera at that time did not have a means for depth perception. Now, a decade later, Apple has introduced HEIC image capture (a more advanced image compression format than JPEG) with LIDAR/depth information that can save the layers of the image, much like the idea of multiple filter layers in a Photoshop file. <br /></p><p>Because we didn't have figure/ground isolation, Nicola created a generic gesture motion detection utility which we applied to allow
users to play video games on a paired device by use of hand motions and tilting rather than pushing
buttons on a screen. We decided it would be fun to adapt the tools for
distribution with game developers. Alas, we were too early with this
particular initiative. I pitched the concept to one of the game studios in the San Francisco Bay Area. While they thought the gameplay concept looked fun, they said politely that
there would have to be a lot more mobile gamers before there would be demand
among those gamers to play in a further augmented way with gesture
capture. The iPad had only recently come out. There simply wasn't a significant market for our companion app yet. <br /></p><p>Early attempts to infer machine models of human or vehicle motions would overlay an assumed shape of a body over a perceived entity in the camera's view. In a video feed from a driving car, it might be inferred that every moving object in the field of view is another car. (So being a pedestrian or cyclist in the proximity of self-driving cars became risky, as the seeing entity's assumptions about objects predicted behaviors different from those pedestrians and bicyclists actually exhibited.) In a conference demo on an expo floor, it is likely that most of what the camera sees are people, not cars. So the algorithm can be set to infer body position, represented by assumed skeletal overlays of legs related to bodies, and presumed eyes atop bodies. The purpose of the program pictured below was to be used in shop windows to notice when someone was captivated by the displayed items in the window. For humans nearby, eyes and arm positions were accurately projected. For humans far away, less so. (The Consumer Electronics Show demo did not capture any photographs
of the people moving in front of the camera. I captured that separately with my camera.) </p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgdlPLjaAID7Cit0_AkhuFirm5vy9p0dJKUUXFVjDrY_66SBT0XP247v5LCL7ApuNkiLPxNibG8UnoGS-_JzRyE7mXnIMQIhuxx2nAUBW29poBhxMUY9VzW9s5AoP56lon5H0wDlmcFwx88tu1WSzJ2dH-gMakCDYSOhqvuLDhFzJ40qPlJONBfcZHu/s1205/Early%20Computer%20Vision%20Analysis.jpeg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="829" data-original-width="1205" height="440" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgdlPLjaAID7Cit0_AkhuFirm5vy9p0dJKUUXFVjDrY_66SBT0XP247v5LCL7ApuNkiLPxNibG8UnoGS-_JzRyE7mXnIMQIhuxx2nAUBW29poBhxMUY9VzW9s5AoP56lon5H0wDlmcFwx88tu1WSzJ2dH-gMakCDYSOhqvuLDhFzJ40qPlJONBfcZHu/w640-h440/Early%20Computer%20Vision%20Analysis.jpeg" width="640" /></a></div><p>Over
the ensuing years, other exciting advancements brought the capture of
hand gestures to the mainstream. With the emergence of VR developer
platforms, the need for alternate input methods became even more
critical than in the early tablet days. With users wearing
head-mounted-displays (HMDs) and glasses, it became quite obvious that
conventional input methods like keyboard and mouse were going to be too
cumbersome to render in the display view. So rather than trying to
simulate a mouse and keyboard in this display, a team of developers at
LeapMotion took the approach of utilizing an infrared camera which could
detect hand position, then infer the knuckle and joint positions of the
hands. These inferred positions could in turn be rendered as input methods to any operating
system, telling the OS what the hands were signaling at the same time as they were projected into the head-mounted-display.
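As a toy illustration of the kind of inference involved (not LeapMotion's actual code, and with made-up threshold values), suppose the tracker has already given us a few fingertip and palm coordinates; classifying a pose then reduces to simple geometry:

```python
import math

def distance(a, b):
    """Euclidean distance between two 3D points (toy units of millimeters)."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

def classify_gesture(thumb_tip, index_tip, palm_center):
    """Classify a hand pose from three inferred keypoints.

    A real tracker reports roughly twenty joints per hand; three
    points are enough to illustrate the kind of geometry involved.
    """
    if distance(thumb_tip, index_tip) < 20:        # thumb and index nearly touching
        return "pinch"
    if distance(index_tip, palm_center) < 40:      # fingers curled toward the palm
        return "grab"
    return "open"

# An open hand: thumb and index far apart, index tip far from the palm.
print(classify_gesture((0, 0, 0), (80, 10, 0), (40, -60, 0)))  # → open
```

The real systems are, of course, probabilistic and far richer, but the principle is the same: raw camera frames become keypoints, and keypoints become discrete commands the OS can act on.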
(Example gesture captures could be mapped to commands for grabbing objects, gesturing for menu options, etc.) </p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgtgz-SXHmcqMZ-S8mWywyIxXIdclSFfkgAAin6mS4Bdhj62K0Bm7_j8Nr-frzbgsqfYA1fphPF7yu-6IHb2_MPQsEpH2ZTVMbNBwH1c72O5zeholc4cxU0sG7GossIdSUHpKywWZ0ksT-2P1Ul9lJat_iFu7YfXBLz_GZshJPOLqMDPXw82A_JK4AR/s3264/LeapMotionAtWebGLMeetup.JPG" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="2448" data-original-width="3264" height="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgtgz-SXHmcqMZ-S8mWywyIxXIdclSFfkgAAin6mS4Bdhj62K0Bm7_j8Nr-frzbgsqfYA1fphPF7yu-6IHb2_MPQsEpH2ZTVMbNBwH1c72O5zeholc4cxU0sG7GossIdSUHpKywWZ0ksT-2P1Ul9lJat_iFu7YfXBLz_GZshJPOLqMDPXw82A_JK4AR/w640-h480/LeapMotionAtWebGLMeetup.JPG" width="640" /></a></div><p></p><p></p><p></p><p></p><p>The views above are my hands detected by infrared from a camera sitting below the computer screen in front of me, then passed into the OS view on the screen, or into a VR HMD. The joint and knuckle positions are inferences based on a model inside the OS-hosted software. The disadvantage of LeapMotion
was that it required a separate infrared camera to be set up, plus
some additional interfacing through the OS to the program
leveraging the input. But the good news was that OS and hardware
developers noticed and could pick up where LeapMotion left off to
bring these app-specific benefits to all users of next
generation devices. Another five years of progress and the application of the same
technology in Quest replaces the x-ray style view of the former approach
with something you can almost perceive as the realistic presence of one's own
hands. <br /></p><div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjZWPhnv5FX-Zko-sp_3eurvHtWGIPjkgQsNjusMGqbteSs7ncL2ktoWcYt4dLjEF20RW39x4ceM9vxDk7TfCkfXTNZEOdrz1-c5-PvEn6MwvszCT29UFv6ga_OvWVUQjYkrDi2BY3b3yC-IE-wMn1M7RSjYO6eEtA3x7rIcBMbI3Ss6NhuGceegloV/s4282/HandsMenus.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1372" data-original-width="4282" height="206" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjZWPhnv5FX-Zko-sp_3eurvHtWGIPjkgQsNjusMGqbteSs7ncL2ktoWcYt4dLjEF20RW39x4ceM9vxDk7TfCkfXTNZEOdrz1-c5-PvEn6MwvszCT29UFv6ga_OvWVUQjYkrDi2BY3b3yC-IE-wMn1M7RSjYO6eEtA3x7rIcBMbI3Ss6NhuGceegloV/w640-h206/HandsMenus.png" width="640" /></a></div></div><div> </div><div>Hololens and Quest thereafter merged the former external hardware camera into the HMD directly facing forward. This could then send gesture commands from the camera inputs to all native applications on the device, obviating the need for app developers to toil with the interpretive layer of joint detection inside their own programs. In the Quest platform, app
developer adoption of those inputs is slow at present. But for those that do support it, you can use
"Hands API" to navigate main menu options and high-level app selection.
A few
apps like Spatial.io (pictured above) take the input method of the Hands API and allow
the inferred hand position to replace the role formerly filled by hardware
controllers for Spatial content and motility actions. Because Spatial is a hosted virtual world platform, the Hands API offers the user the capability to navigate within the 3D
space through more direct hand signals. This lets the user operate in the environment with their hands in a way resembling digital semaphore. Like Spider-Man's web-casting wrist gesture, a certain motion will teleport
the user to a different coordinate in the virtually-depicted 3D space.
Pinching fingers allows command menus to come up. Hovering over an
option and letting go of the pinch selects the desired input command.
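The pinch-to-open, hover, release-to-select flow just described can be sketched as a tiny state machine. This is a toy illustration with hypothetical event names and commands, not the actual Spatial or Hands API:

```python
class PinchMenu:
    """Toy state machine for a pinch-driven menu: pinching opens the
    menu, moving the hand hovers over options, and releasing the
    pinch selects whatever is currently hovered."""

    def __init__(self, options):
        self.options = options
        self.open = False
        self.hovered = None

    def on_pinch_start(self):
        self.open = True
        self.hovered = self.options[0]   # default highlight

    def on_hover(self, option):
        if self.open and option in self.options:
            self.hovered = option

    def on_pinch_release(self):
        """Releasing the pinch commits the hovered command and closes the menu."""
        if not self.open:
            return None
        choice, self.open, self.hovered = self.hovered, False, None
        return choice

menu = PinchMenu(["teleport", "chat", "exit"])
menu.on_pinch_start()
menu.on_hover("teleport")
print(menu.on_pinch_release())  # → teleport
```

The interesting design point is that the gesture recognizer only has to emit three coarse events; all of the menu semantics live in ordinary application code.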
The entire menu of the Spatial app can be navigated with hand signals
much like the Spielberg film <a href="https://www.youtube.com/watch?v=PJqbivkm0Ms" target="_blank">Minority Report'</a>s futuristic computer
interfaces. It takes a bit of confused experimentation before the user's neuroplasticity rewires the understanding of the new input method. (In the same way, learning the abstract motions of a mouse cursor or game-pad controls requires a short acclimatization period.)<br /></div><div> </div><div>This is a great advancement for the minority of people reported to be putting HMDs on their heads to use their computers. But what about the rest of us who don't want to have visors on our noggins? For those users too, we can anticipate computer input from our motions in front of the machine of our choice very soon. Already, the user-facing camera in iOS devices detects the full facial structure of the user. The depth vision of that camera enables mirroring of the shape of our facial features such that it can be used much the way old skeleton keys precisely matched the internal workings of bolt locks. Simulating the precise shape of your face, plus the pupil detection of your eyes looking at the screen, is a trustworthy indication that you are awake and presently expecting your phone to awaken as well. Pointing my camera at a photo of me doesn't unlock the phone, nor would someone pointing my phone at me while I'm not looking at it. As a fun demonstration of this capability, new emoji packs called "<a href="https://support.apple.com/en-us/HT208986" target="_blank">memoji</a>" allow you to enliven a cartoon image of your selection with CGI animation by mirroring your facial gestures. Cinematographers have previously used body tracking to enable such animation for films including <a href="https://www.youtube.com/watch?v=w_Z7YUyCEGE" target="_blank">Lord of the Rings </a>and <a href="https://www.youtube.com/watch?v=4NU9ikjqjC0" target="_blank">Planet of the Apes</a>. 
Now everybody can do the same thing with position mirroring models hosted in their phones.</div><p>The next great leap of utility for cross-computer communication as well as computer programming will be enabling the understanding of other human communication beyond what our faces and mouths express. People video-conferencing use body language and gesture through the digital pipelines of our web cameras. Might gestural interactions be brought to all computers allowing conveyance of intent and meaning to the OS for command inputs? </p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://www.amazon.com/Gerard-Aflague-Collection-American-Language/dp/B076JTXSYG" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;" target="_blank"><img border="0" data-original-height="616" data-original-width="467" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiT9jFPVaEZdv-lFbTx5KWUkDtV7J6kvTQl9-oCl3m0-xbaE0hydOAxhzLu29HWKKc-NcQwtxR_fq-6lIxp4VwRtAAwCevic-uYuYoXRr5NU2rA_mKnnEBguQhGjV_i3rU79kDx8q6sXSZTpKqu2IPjW_60js1zpyh2vzSVRTYpcw8GBV7knfXjSQM_/w152-h200/ASL.png" width="152" /></a></div>At a recent worldwide developer convention, Apple engineers demonstrated a concept of using machine pattern recognition to simulate gestural input commands to the operating system extending and expanding the approach from the infrared camera technique. Apple's approach uses a set of training images stored locally on the device to infer input meaning. The method of barcode and symbol recognition with the <a href="https://ncubeeight.blogspot.com/2022/09/" target="_blank">Vision API</a> pairs a camera-matched input to a reference database. The matching database can of course be a web query to a large existing external database. 
But for a relatively small batch of linguistic pattern symbols, such as American Sign Language, a collection of reference gestures can be hosted within the device memory and paired with the inferred meaning the user intends to convey, allowing immediate local interpretation without a call to an external web server. (This is beneficial for security and privacy reasons.)<br /><p></p><p>In Apple's demonstration <a href="https://developer.apple.com/videos/play/wwdc2021/10039/" target="_blank">below</a>, Geppy Parziale uses the embedded computer vision capability of the operating system to isolate the motion of two hands separately from the face and body. In this example he tracked the gesture of his right hand separately from the left hand making the gesture for "2." Now that mobile phones have figure/ground isolation and the ability to segment portions of the input image, enormously complex gestural sign language semiotics can be achieved in ways that Nicola and I envisioned a decade prior. 
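The on-device matching idea can be sketched as a nearest-neighbor lookup against a small local reference set. The template coordinates below are invented toy values for illustration, not Apple's actual data or API, and real ASL recognition would also have to track motion over time:

```python
import math

# Hypothetical on-device reference set: each static letter shape mapped
# to normalized (x, y) fingertip positions (thumb through pinky).
REFERENCE_POSES = {
    "A": [(0.9, 0.1), (0.5, 0.2), (0.5, 0.25), (0.5, 0.3), (0.5, 0.35)],
    "B": [(0.2, 0.2), (0.5, 0.9), (0.55, 0.95), (0.6, 0.9), (0.65, 0.85)],
    "C": [(0.7, 0.5), (0.6, 0.8), (0.65, 0.8), (0.7, 0.75), (0.75, 0.7)],
}

def pose_distance(a, b):
    """Sum of point-wise Euclidean distances between two poses."""
    return sum(math.dist(p, q) for p, q in zip(a, b))

def match_sign(observed):
    """Return the reference letter nearest to the observed fingertip
    positions -- computed entirely locally, with no web request."""
    return min(REFERENCE_POSES,
               key=lambda letter: pose_distance(observed, REFERENCE_POSES[letter]))

# A pose slightly perturbed from the "B" template still matches "B".
print(match_sign([(0.21, 0.19), (0.5, 0.88), (0.56, 0.94), (0.61, 0.9), (0.64, 0.86)]))  # → B
```

Because the whole reference table fits in memory, the lookup is fast and private, which is exactly the trade-off described above for keeping interpretation on the device.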
The rudiments of interpretation via camera input can now represent the shift of meaning over time that forms the semiotics of complex human gestural expression.<br /></p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg1PV7W2R5x7LI_pXym_OHoeUlWuYVSgpnl0mWlbdrUGnMdLAy4aSYgiK1itnqO5VdamuEThVhe-PBudjHV8O3RDnyeQpF3q0bQq0Q2XYWbMkVnNmUgXOc_iAS_x1kYmZNSoa_jXfyCOPbSY9f9TYOCS_EYdFocHUcSeZHWbDdnCBNsDxVgGR0QTECx/s498/Hand%20Pose%20Detection3.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="394" data-original-width="498" height="506" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg1PV7W2R5x7LI_pXym_OHoeUlWuYVSgpnl0mWlbdrUGnMdLAy4aSYgiK1itnqO5VdamuEThVhe-PBudjHV8O3RDnyeQpF3q0bQq0Q2XYWbMkVnNmUgXOc_iAS_x1kYmZNSoa_jXfyCOPbSY9f9TYOCS_EYdFocHUcSeZHWbDdnCBNsDxVgGR0QTECx/w640-h506/Hand%20Pose%20Detection3.png" width="640" /></a></div> <p></p><p>I remember in high school going to my public library and plugging myself into a computer, via a QWERTY keyboard, to try to learn the language that computers expect us to comprehend. But with these fascinating new transitions in our technology, future generations may be able to "speak human" and "gesture human" to computers instead of having us spend years of our lives adapting to them! </p><p>My gratitude, kudos and hats off to all the diligent engineers and investors who are contributing to this new capability in our technical platforms. <br /></p><p> </p>ncubeeighthttp://www.blogger.com/profile/07874174268038500927noreply@blogger.com0tag:blogger.com,1999:blog-4288746931605915378.post-75963086606099622182022-09-09T10:46:00.001-07:002022-09-09T17:03:32.570-07:00Looking it up with computer vision <p>My mother introduced me to a wide range of topics when I was growing up. She had fascinations with botany, ornithology, entomology and paleontology, among the so-called hard sciences. 
As she was a teacher, she had adopted certain practices she'd learned from studying child development and psychology in her master's degree program on the best way to help a young mind learn without just teaching <i>at</i> it. One of her greatest mantras from my childhood was "Let's look it up!" Naturally she probably already knew the Latin name for the plant, animal or rock I was asking about. But rather than just telling me, which would make me come to her again next time, she taught me to always be seeking the answers to questions on my own. </p><p>This habit of always-be-looking-things-up proved a valuable skill when it came to learning languages beyond Latin terms. I would seek out new mysteries and complex problems everywhere I went. When I traveled through lands with complex written scripts that were different from English, I was fascinated to learn the etymologies of words and the way that languages were shaped. Chinese/Japanese script became a particularly deep well that has rewarded me with years of fascinating study. Chinese pictographs are images that represent objects and narrative themes through shape rather than sound, much like the gestures of sign language. I'd read that pictographic languages are considered right-brain dominant because understanding them depends on pattern recognition rather than decryption of alphabetic syllables and names, which are typically processed in the left brain. I had long been fascinated by psychology, so I thought that learning a right-brain language would give me an interesting new avenue to conceive language differently and potentially thereby think in new ways. It didn't ultimately change me that much. But it did give me a fascinating depth of perspective into new cultures.</p><p>Japanese study became easier by degrees. 
The complexity of learning Japanese as a non-native had to do with the idea of representing language by brush strokes instead of phonemes. To look up a word you don't know how to pronounce, you must look up a particular shape within the broader character, called a radical. You then look through a list of potential matches by total brush stroke count that contain that specific radical. It takes a while to get used to. I'd started, while living in Japan with the paper dictionary look-up process, which is like using a slide rule to zero in on the character which can then be researched elsewhere. Computer manufacturers have invented calculator-like dictionaries that sped up the process of search by radical. Still it typically took me 40-60 seconds with a kanji computer to identify a random character I'd seen for the first time. That's not so convenient when you're walking around outside in Tokyo. So I got in the habit of photographing characters for future reference when I had the time for the somewhat tedious process.</p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgCVG5ksvyTr3MmdsL_ZNSUA2fZKgajdttQJwufBoUiTkiP00qNwfAT4qXX7XmVYbaAB92YexdrRJsBe9bm6l_BUqvy9rjZtTmWUwIFP3uYStxiJXXNwLr3wh9ZZCSys3zXE69s89uBRUh5UzfPC-iFBn7_THSLf9swryEklsqh-jvAz9B0dDtt2oI0/s2007/KanjiScan.PNG" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="2007" data-original-width="1166" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgCVG5ksvyTr3MmdsL_ZNSUA2fZKgajdttQJwufBoUiTkiP00qNwfAT4qXX7XmVYbaAB92YexdrRJsBe9bm6l_BUqvy9rjZtTmWUwIFP3uYStxiJXXNwLr3wh9ZZCSys3zXE69s89uBRUh5UzfPC-iFBn7_THSLf9swryEklsqh-jvAz9B0dDtt2oI0/w116-h200/KanjiScan.PNG" width="116" /></a></div>Last month I was reviewing some vocabulary on my phone, when I noticed that Apple had introduced optical-character-recognition (OCR) into the operating system of 
new iPhones. OCR is a process that had been around for years on large desktop computers with expensive supplemental software. But having this at my fingertips made the lookup of kanji characters very swift. I could read any text through a camera capture and copy it into my favorite kanji dictionaries (<a href="http://jisho.org" target="_blank">jisho.org</a> or the <a href="https://apps.apple.com/us/app/imiwa/id288499125" target="_blank">imiwa</a> app). From there I could explore compound words using those characters and their potential translations. Phones have been able to read barcodes for a decade. Why hadn't the same capability been applied to Chinese characters until now? Just like barcodes, they are specific image blocks that map directly to specific meanings. My guess is that recognizing barcodes had a financial convenience behind it. Deciphering words for poly-linguists was an afterthought that was finally worth supporting. This is now my favorite feature of my phone! <div><br /></div><div>What's more, the same <a href="https://developer.apple.com/documentation/vision" target="_blank">Vision API</a> allows you to select any text from any language, and even objects in pictures, and send it to search engines for further assistance. For instance, if you remember taking a picture of a tree recently, but don't know what folder or album you put it in, Spotlight search allows you to query across the photo library on your phone even if you never tagged the photo with a label for "tree." Below you can see how the device-based OCR indexing looked for the occurrence of the word "tree" and picked up the image of the General Sherman Tree exhibit sign in my photo collection from a trip to Sequoia National Park. You can see the many different parts of the sign where the Vision API detected the word "tree" in a static image. 
<div><br /><div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjcK-y9U6sHrHsPO2HOa3qGEMTW19r1h-tD4q13jkCehOGuuBqALsRv1syF-KLmka70KvDSvlX4-l59dt_qT1ULT7GAZ2Mb-nJkhOZyGqyJ7bky6ibFEjTAsgmltl9XGpysdKzh29d3BWGNnBpuMPkGIS_5pFJxg6I20nB7dt8a9xcb24jcey9WoC1e/s1253/OCR%20Tree%20In%20Image.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1253" data-original-width="1213" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjcK-y9U6sHrHsPO2HOa3qGEMTW19r1h-tD4q13jkCehOGuuBqALsRv1syF-KLmka70KvDSvlX4-l59dt_qT1ULT7GAZ2Mb-nJkhOZyGqyJ7bky6ibFEjTAsgmltl9XGpysdKzh29d3BWGNnBpuMPkGIS_5pFJxg6I20nB7dt8a9xcb24jcey9WoC1e/w620-h640/OCR%20Tree%20In%20Image.jpg" width="620" /></a></div><p>But then I noticed that even if I put in the word "leaf" in my Spotlight search, my photos app would pull up images that had the shape of a leaf in them, often on trees or nearby flowers that I had photographed. The automatic semantic identification takes place inside of the Photos application with a machine learning process, which then has a hook to show relevant potential matches to the phone's search index. This works much like the face identification feature in the camera which allows the phone to isolate and focus the image on faces in the viewfinder when taking a picture. There are several different layers of technology that achieve this. First identifying figure/ground relationships in the photo, which is usually done at the time the photo is taken with the adjustable focus option selected by the user. (Automated focus hovers over the viewfinder when you're selecting the area of the photo to pinpoint as the subject or depth of focus of the photo.) Once the subject and ground can be isolated from the background, a machine learning algorithm can run on a batch of photos to find inferred patterns, like whose face matches to which person in your photo library. 
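Once on-device recognition has produced text for each photo, the search side is conceptually just an inverted index from words to photos. Here is a minimal sketch with hypothetical photo ids and OCR output, not Apple's actual implementation:

```python
from collections import defaultdict

def build_index(ocr_results):
    """Build an inverted index: word -> set of photo ids whose
    recognized text contains that word."""
    index = defaultdict(set)
    for photo_id, text in ocr_results.items():
        for word in text.lower().split():
            index[word.strip(".,!?")].add(photo_id)
    return index

# Hypothetical OCR output for three photos in a library.
library = {
    "IMG_0117": "General Sherman Tree, largest living tree on earth",
    "IMG_0342": "Trailhead parking lot",
    "IMG_0520": "Please do not climb the tree",
}

index = build_index(library)
print(sorted(index["tree"]))  # → ['IMG_0117', 'IMG_0520']
```

The expensive step is the recognition itself; once that has run in the background over the library, a query like "tree" is a cheap dictionary lookup, which is why the search feels instant.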
</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgSZvKOGYVibi6l0PCEsLDLUsAK_qMGWSj6KWvR0lpCXwXP5MuyAWCRfWqBtAVtlJN8Cl_MQgpKxKw9DukWz6KyXtTlDsgPMAXVZJBgcB7kFjUwPdpttYnXb0ffMchSHuRmIPjRtLmWG19ntJ2F4RpOZJwE-USpJf4msrMbolu3s9tMzo6O-_b63QIw/s3614/Semantic%20Search.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="2288" data-original-width="3614" height="406" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgSZvKOGYVibi6l0PCEsLDLUsAK_qMGWSj6KWvR0lpCXwXP5MuyAWCRfWqBtAVtlJN8Cl_MQgpKxKw9DukWz6KyXtTlDsgPMAXVZJBgcB7kFjUwPdpttYnXb0ffMchSHuRmIPjRtLmWG19ntJ2F4RpOZJwE-USpJf4msrMbolu3s9tMzo6O-_b63QIw/w640-h406/Semantic%20Search.png" width="640" /></a></div><p>From this you can imagine how powerful a semantic-discovery tool would be if you had such a camera in your eyeglasses, helping you to read signs in the world around you, whether in a foreign language or even your own native language. It makes me think of Morgan Freeman's character "<a href="https://www.youtube.com/watch?v=nnT4zzHz2BQ" target="_blank">Easy Reader</a>," who'd go around New York looking for signs to read in the popular children's show Electric Company. The search engines of yester-decade looked for semantic connections between words written and hypertext-linked on blogs. This utility we draw on every day uses machine-derived indications of significance: the way people write web pages about subjects, and which terms those authors link to which subject webpages. The underlying architecture of web search is all based on human action. Then the secondary layer of interpretation of those inferences is based on how many times people click on results that address their query well. Algorithms are used to make the inferences of relevancy. 
But it's human authorship of the underlying webpages, and human preference for those links thereafter, that informs the machine learning. Consider that all of web search is based on just what people decide to publish to the web. Then think about all that is not published to the web at present, such as much of the offline world around us. So you can imagine the semantic connections that could be drawn through the interconnectedness of the tangible world we move through every day. Assistive devices that see the code we humans use to thread together our spatially navigable society form a web of inter-relations that will be easily mapped by the optical web crawlers we employ over the next decade.</p><p>To test out how the Vision API deals with ambiguity, you can throw a picture of any flower of varying shape or size into it. The image will be compared to potential matches inferred against a database of millions of flowers in the image archives of WikiCommons, the public-domain files which appear on Wikipedia. This is accessed via the "Siri knowledge" engine at the bottom of the screen on your phone when you look at an image (see below the small star shape next to "i"). While WikiCommons is a public database of free-use images, the approach could easily be expanded to any corpus of information in the future. For instance, there could be a semantic optical search engine that only matches against images in the Encyclopedia Britannica. 
Or if you'd just bought a book on classic cars, the optical search engine could fuzzy-match input data from your future augmented reality lenses to only match against cars you see in the real world that match the model type you're interested in.</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh67gsp0TbBKmYNz7kvhlHygn9TWLCqU2Ol2uLcCpx0Ig0v5mCHjrTgWgfZT0PfaUbG5GP_prTgH1SqNRqV0Z9p_rfSTPJ8K1aHTQlH84D8vSbI-4IXLJvCG1GdlYIsPUBFMFkDK-vwyb9qA5pE1wZn9ZDoMiL5XtlRZcsUxE44CsiYQ2at0bbcD65P/s1170/CrepeMyrtleLookup.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1170" data-original-width="1092" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh67gsp0TbBKmYNz7kvhlHygn9TWLCqU2Ol2uLcCpx0Ig0v5mCHjrTgWgfZT0PfaUbG5GP_prTgH1SqNRqV0Z9p_rfSTPJ8K1aHTQlH84D8vSbI-4IXLJvCG1GdlYIsPUBFMFkDK-vwyb9qA5pE1wZn9ZDoMiL5XtlRZcsUxE44CsiYQ2at0bbcD65P/w598-h640/CrepeMyrtleLookup.jpg" width="598" /></a></div><br /><p>Our world is full of meanings that we interpret from it or layer onto it. The future semantic web of the spatial world won't be limited to only what is on Wikipedia. The utility of our future internet will be as boundless as our collective collaborative minds are. If we think of the great things Wikimedia has given us, including the birth of utilities like the brains of Siri and Alexa, you can understand that our machines only face the limits that humans themselves impose on the extensibility of their architectures. <br /></p><p><br /></p><p> </p><p> </p><p><br /></p><br /></div></div></div>ncubeeighthttp://www.blogger.com/profile/07874174268038500927noreply@blogger.com0tag:blogger.com,1999:blog-4288746931605915378.post-61179046100627264062022-08-18T21:03:00.008-07:002022-08-18T22:02:41.694-07:00On the evolution of mechanical pencils<p>When I was a physics student, my father started giving me mechanical pencils. 
You’d think all possibilities would have been invented by the 1980s. But some of these pencils were incredible feats of engineering, with fun new ways to click out and retract the pencil graphite. I think my father had a point that there was always more to be invented, even for something so simple. We'd frequently look at bridges and discuss ways that they could be designed differently to distribute the weight, and talk through the engineers' decisions behind the common designs we saw in the real world. Every time I'd invent something new, I'd diagram it for him with my pencils and he'd ask probing questions about the design choices I had made. One day, for instance, I’d invented a "Runner’s Ratchet Shoe." I was an avid runner in those days and would get sore knees and shin splints. The Runner's Ratchet was a set of levers attached to the runner's ankle that would soften the impact of each footfall, cushioning the shock to the knees while redirecting the downward force into a springing action that propelled the runner forward. He looked at my drawing and exclaimed, “Congratulations for re-inventing the bicycle!” It took me a moment to see that the motion of the foot between my invention and the bicycle was the same, while the muscle stress of my design was probably greater, obviating the benefit I was pursuing in the first place.<br /><br />
When I went to college, he started giving me fountain pens. I asked him why he was giving me all these. I could just use a biro pen, after all. He said, “If you have a better pen, you’ll write better thoughts.” I could feel the effect of the instrument on the way I framed my thoughts. I was more careful and considered about what I wrote and how. Fountain pens slowed me down a little bit. They force you to write differently, and sometimes they alter the pace of your writing and therefore the way you see the initial thought. It feels as if you’re committing something weighty to paper when you write with quill ink. I enjoyed the complexity over time. I took away the lesson that the instrument shapes the experience, an emphasis on the path over the goal. Complexity, challenge and adversity in any process can make the end product more refined.<br /><br />
One day, I was working on a new invention that, yet again, had complex moving gears and levers. It was another piece of sports equipment I'd named the Ski Pole Leg Support. This invention was again meant to address knee soreness, this time from hours of sitting on ski lifts with dangling legs tugged on by heavy skis and ski boots. The device would allow skiers like me to transfer the weight of the legs onto the chair lift through the length of the ski pole, which would hold a retractable support for the base of the ski boot. As I was visualizing the motion of the device in my head, I thought that what I really needed was a pen that could draw in 3 dimensions directly, the way I saw the device in my mind's eye. That way I could demonstrate the machine in spatial rendering rather than asking my professors to look at my 2D drawings and then asking <i>them</i> to re-imagine them as 3 dimensional objects, Da Vinci style. </p><p>With this new inspiration, I designed a concept for just such a drawing tool. It looked like a fishbowl with a pen-like proboscis that would move through a viscous solution to draw lines, which would hang in place supported by the viscosity of the medium. I realized that a fountain pen moving through the solution would disturb the suspension itself through friction, damaging the rendered image as the drawing got more complex, unless the user drew from the bottom up. So, as an alternative to that approach, I imagined using an array of piezoelectric ultrasound speakers laid out in a grid in the base of the globe to direct shock waves through the solution, converging them on chosen points. At the locations where the waves intersected, constructive interference would increase the fluid pressure at the desired drawing coordinates. The solution at those shock-points would form a visible distillate from the solution's chemicals, which would allow the drawing to persist. 
(The same way that a cathode ray tube uses a single stream of electrons to paint a picture over a grid where the electrons strike the luminescent screen. But I'd use sound instead of electrons.) When the drawing was finished, you could preserve it temporarily, then erase it by spinning the globe. The rotation would pull the distillate apart so it could settle in the base to dissolve again and be reused, the way a lava lamp's molten wax re-melts when it falls close to the heated base. I thought of it as a 3D version of the 2D Etch-a-sketch toy that was popular during my childhood. Might this drawing globe have market potential, I wondered?<br /><br />
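Converging waves from a grid of emitters onto a point is, in essence, a phased-array focusing calculation: fire each emitter with a delay chosen so that every wavefront arrives at the target at the same instant, concentrating pressure there. A toy sketch of that timing math, assuming a water-like speed of sound for the imagined viscous medium (the value and function are my illustration, not part of the original design):

```python
import math

SPEED_OF_SOUND = 1480.0  # m/s, roughly water; the actual medium is an assumption

def focus_delays(elements, target, c=SPEED_OF_SOUND):
    """Per-element firing delays (seconds) so that wavefronts emitted from a
    grid of `elements` (3D coordinates) all arrive at `target` simultaneously,
    reinforcing each other (constructive interference) at that point."""
    dists = [math.dist(e, target) for e in elements]
    farthest = max(dists)
    # Elements closer to the target wait longer, so every arrival coincides
    # with the wavefront from the farthest element.
    return [(farthest - d) / c for d in dists]
```

Sweeping the target coordinate through the globe, point by point, would trace out the 3D drawing, much as a CRT sweeps its electron beam across a 2D screen.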
Before I shared the concept with toy manufacturers, I met an inventor named <a href="https://www.fandm.edu/magazine/archive/up-close/2008/02/14/up-close-michael-zane-70" target="_blank">Michael Zane</a> who had graduated from my college, Franklin and Marshall. He said he was willing to look over my product concepts and give me some advice. After he stared at my drawings a bit, he gave me an approving glance. He said he liked my ideas. But he then commented, “If you have any interest or ability outside of inventing, pursue <i>that</i> with a passion!” He thought his career path was not something that he’d wish on anybody. It was incredibly difficult to file and protect patents as you tried to sell your products in a fiercely competitive international market, he explained. He told me stories of many inventors whose lives were consumed and hopes dashed by pouring too much of themselves into one idea. So his advice was to live a different life than he saw on paper as my future. I did go to the US Patent and Trademark Office to research prior art in the sectors of several of my future patent ideas. But over the years I let my dreams of physical hardware inventions trickle out of my mind and focused instead on new problems and opportunities in the digital technology and internet space.<br /><br />
Looking back on my 3D Etch-a-sketch concept 30 years later, I see how a fountain pen for aquariums wasn’t going to find a mass-market fit, even if I'd thrown all my gusto behind it. Mr. Zane had saved me lots of frustration and door knocking. I’m very glad I pursued those other interests I had at the time. Mechanical pencils and fountain pens are cool. But your life should be about more than something that has been reinvented 1000 times since the dawn of art. The inventions I focused on over the last three decades were team efforts, not lone entrepreneur stories. Coordinating groups of people in a company to build something new is the way to go these days. As the oft-quoted adage goes: if you want to go fast, go alone; if you want to go far, go together. My teams have won patents, changed industries and impacted the lives of millions of people. It would have been a different life story if I'd just pursued selling plastic drawing toys at the start of my career.<br /><br />
I say all this because I have been toying around with VR and AR products for the last 8 years, since my company decided to leap into the new industry. I’m starting to see echoes of what I’d wanted decades ago, now implemented in products. My colleagues and I go into VR to discuss technology and new product ideas. We tend to use <a href="http://Spatial.io">Spatial.io</a>, a virtual reality conferencing platform. One day I drew a diagram in the air with my fingers. I described a product concept I’d debated with one of my friends, <a href="https://www.linkedin.com/in/pdouriaguine/" target="_blank">Paul Douriaguine</a>, a colleague from my time at a startup in Sydney. We discussed the concept of using aerial photography and photogrammetry to assemble a virtual reproduction of an oil refinery or other physical facility or factory. We considered using automated, time-lapsed images captured from multiple drone flights around the facility to watch for areas of discoloration that might indicate mold, rust or oil leaks, helping prevent damage to the physical structure. 
</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjmZ_9BRSUn5QZA3apzTix1tUalC8wYIsHqQuITQU2EJ2Z5jv2m32O_qGPJt5HRPaMBU7Hm7Q_xB0CHmnI4RKF-EdnlyptylmrBcXETWE0717yVBzlm8N4v1PiHtGirPOBwoGvg9rXZSiYZSqj1or-OSn5c0tU6VMCoB7gQbOIFrEdBcgOc093cs1x3/s1148/Christopher%20Scrawl.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1018" data-original-width="1148" height="568" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjmZ_9BRSUn5QZA3apzTix1tUalC8wYIsHqQuITQU2EJ2Z5jv2m32O_qGPJt5HRPaMBU7Hm7Q_xB0CHmnI4RKF-EdnlyptylmrBcXETWE0717yVBzlm8N4v1PiHtGirPOBwoGvg9rXZSiYZSqj1or-OSn5c0tU6VMCoB7gQbOIFrEdBcgOc093cs1x3/w640-h568/Christopher%20Scrawl.jpg" width="640" /></a></div><p></p><p>My rendering, pictured above, showed how a drone flight path, conducted autonomously or crowd-sourced, could capture structural images for analysis. The flight path, portrayed in green, traced the camera angles a drone would follow around the facility, depicted in blue. Then my friend <a href="https://www.linkedin.com/in/johnpjoseph1/" target="_blank">John P Joseph</a>, who had actually worked on oil facilities with his own AR company, jumped in and diagrammed how his team looked at the problem for long-distance pipeline maintenance and function monitoring. 
</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgvqwxUpkbG_DCT8Jf6eOxwhddQv8QN0p6SEmPOW3kw_driRu9uV6vT2dHiR-lyA5KmLNBQPwuvYLRnf0fa0Xm6C-vxAQlAuLrpBQGolanAH55_ZY7lEI9lPs8YQOBYjVMrzeh8D-K3tTZmjImymxEivsXSKT9Nyc4yrf2F1GmfDxViSDt6VyjjLkPA/s1143/John.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1143" data-original-width="1010" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgvqwxUpkbG_DCT8Jf6eOxwhddQv8QN0p6SEmPOW3kw_driRu9uV6vT2dHiR-lyA5KmLNBQPwuvYLRnf0fa0Xm6C-vxAQlAuLrpBQGolanAH55_ZY7lEI9lPs8YQOBYjVMrzeh8D-K3tTZmjImymxEivsXSKT9Nyc4yrf2F1GmfDxViSDt6VyjjLkPA/w566-h640/John.jpg" width="566" /></a></div><p>Then my other friend, <a href="https://www.linkedin.com/in/oyiptong/" target="_blank">Olivier Yiptong</a>, jumped in to talk about how to build the server architecture behind the service we were describing, spanning pipes, facilities and flying devices. </p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj5or1MzP8K1LHvIv5W6waC6DJyuz24oSLzWhPqKT8fnCH8s04UBLvxTN8-exLN20nRer3KNlsGSg_j-Ct8MnUs6UrFZLcfm2_fGc96exuyYnEGUI7H8MgqkhAJtf1suKv6UJye2hIL2WuAX7QF1MCctN9tGX85_9IMiP1Wmrey9yZe0H9ooUpbovqt/s624/Olivier%20&%20Oil%20Rigs2.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="574" data-original-width="624" height="589" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj5or1MzP8K1LHvIv5W6waC6DJyuz24oSLzWhPqKT8fnCH8s04UBLvxTN8-exLN20nRer3KNlsGSg_j-Ct8MnUs6UrFZLcfm2_fGc96exuyYnEGUI7H8MgqkhAJtf1suKv6UJye2hIL2WuAX7QF1MCctN9tGX85_9IMiP1Wmrey9yZe0H9ooUpbovqt/w640-h589/Olivier%20&%20Oil%20Rigs2.jpg" width="640" /></a></div><p>It was an amazing thing to watch. Three people with entirely different backgrounds (business, product and engineering) had assembled spontaneously. 
In the span of about 15 minutes, all of us discussed different layers of a product and service initiative, reaching an understanding of a range of opportunities and limitations through a process that might have taken hours of preparation and presentation time in any other medium.</p><p>The experience made me reflect back in time. My first conception of the best way to draw an invention 3 decades ago was to make a product leap from paper to fishbowl globes. Here I was today, inside the globe with other clever folk, inventing in a shared virtualized space. In Spatial, I was able to diagram the concept in a fun and effective way, even if a bit sloppy because I didn't have a 3D ruler and protractor yet! (Wait a tick... what if... oh never mind...)</p><p>VR is an old idea receiving heaps of attention and investment right now. Just as some say that the Apollo missions were something the past could afford that the present could not, I think that VR is an idea that couldn’t have found a market when I was young, but that can address new use cases and interesting applications now. Perhaps it doesn’t pass the Larry Page toothbrush test: something you’d want to use twice a day. But it is significantly valuable in what it can convey experientially. I find myself preferring it over the camera-gaze experience of pandemic-life video conferencing platforms. Now when my engineering and product expert friends want to meet, we typically opt for the VR meeting, as it feels more human to move and gesture in a space rather than sit transfixed, staring into a camera lens. Perhaps the recent pandemic has created a great enough frustration in general society that we yearn to get back to a 3D experience, even in remote conference meetings. Seeing people through a flat screen while posing to be rendered in 2D is an artifice of past times. I suspect that eventually people will come to prefer 3D to meet when they can’t meet in person more broadly. 
2D seems to be an awkward adolescent phase for our industry, in my view.</p><p>These VR designing pens are similar to the fountain pens and mechanical pencils of yesteryear, but in a medium that is just now being created for our next generation of inventors and future physicists. Our tools are imprecise at present. But in the coming years they will be honed because of the obvious benefit and efficiency they bring in facilitating social connections and collaboration. Over the pandemic years I've gained a new form of conversational venue, one that lets more discussions happen, in better ways, than the technologies I'd grown used to before. I will continue these brainstorms in virtual 3D environments because, when separated from my team, I still want to communicate and share the way we are used to in physical space. There, we obviate separations in space while keeping the robust density of media we can share in our globe-like environment, unlimited by the restrictions of what can be crammed through a small camera aperture.</p>ncubeeighthttp://www.blogger.com/profile/07874174268038500927noreply@blogger.com0tag:blogger.com,1999:blog-4288746931605915378.post-89057400397444216872022-07-08T09:00:00.000-07:002023-02-28T10:25:08.471-08:00The emerging technology for stereoscopic media in the home<p>A decade ago, I followed the emergence of affordable <a href="https://www.prnewswire.com/news-releases/vizio-announces-new-lineup-of-3d-smart-hdtvs-featuring-vizios-award-winning-theater-3d-technology-and-vizio-internet-apps-137004453.html" target="_blank">3D televisions</a> for experiencing movies at home at the Consumer Electronics Show. One of the barriers to adoption of the technology was the lack of media that mainstream audiences could view on the devices. Even if you had one of these advanced TVs, there was little viewable content streamed or sold to those same households. It was just too much work to pair the content with the device. 
Filming in stereoscope is a complex process, and it isn't as well supported by commercial media channels to the home as it is for cinema. </p><p>While stereoscopic headsets are being released in significant volumes following the wave of VR as a mainstream consumer experience, the content availability challenge still looms. (IDC projects <a href="https://www.androidcentral.com/quest-2-2021-most-sold" target="_blank">30 million headsets</a> a year to ship by 2026 across multiple vendors. Meta claims <a href="https://uploadvr.com/quest-2-sold-almost-15-million-idc/" target="_blank">15 Million</a> sold to date.) This time the gaming sector is leading the charge in new media creation, with 3D virtual environment simulation using world-building software platforms distributed by Unity & Epic Games. The video gaming industry dwarfs the scale of cinema in terms of media spend, with the US gaming sector alone totaling over <a href="https://truelist.co/blog/gaming-statistics/#:~:text=In%202021%2C%20gamers%20in%20the,compared%20to%20the%20year%20before." target="_blank">$60 Billion annually</a> in contrast to cinema at <a href="https://www.motionpictures.org/wp-content/uploads/2022/03/MPA-2021-THEME-Report-FINAL.pdf" target="_blank">$37 Billion</a>. So this time around the 3D media story may be different. With the lower production cost of software media creation and a higher per-customer revenue stream from game sales, there will be more options than I had with my 3D Vizio TV.</p><p>I recently discovered the artistry of <a href="https://www.patreon.com/realvr" target="_blank">Luke Ross</a>, an engineer who is
bringing realistic 3D depth perception to legacy video games originally rendered
in 2D. His technique currently adds 3-dimensional "parallax" depth to a 2D scene by having the computer render parallel images of the scene, shown to each eye in a
head-mounted-display sequentially. Leveraging the way that our brains
perceive depth in the real world, his technique persuades us that
typically flat-perspective scenes actually are deep landscapes, receding
into the distance. The recent Disney series <a href="https://www.youtube.com/watch?v=gUnxzVOs3rk" target="_blank">The Mandalorian</a> was filmed using the same world-building programs used to make video game simulations of spacious environments. Jon Favreau, the show's director, chose to film in studio using Unreal Engine instead of George Lucas-style on-location filming because it drastically extended the world landscapes he could reproduce on his limited budget. Converting The Mandalorian into Avatar-like 3D rendering for Vizio TVs or VR head mounted displays would still be a huge leap for a studio to make because of the complexity of fusing simulated and real sets. But when live action goes a step deeper to simulate the actors' movements directly into 3D models, such as the approach of Peter Jackson's <a href="https://www.youtube.com/watch?v=w_Z7YUyCEGE" target="_blank">Lord of the Rings series</a>, rapid rollout to 2D and 3D markets simultaneously becomes far more feasible using Luke Ross's "alternate-eye rendering" (AER).</p><p>Stereoscopic cameras have been around for a long time. Capturing parallax perspective and rendering that two-camera input to two display outputs is the relatively straightforward way to achieve 3D media. What is so compelling about the concept of AER is that the technique achieves depth perception through a kind of illusion which occurs in the brain's perception of synthesized frames. Having a stereoscopic play-through of every perspective a player/actor might navigate in a game or movie is exceedingly complex. So instead, Luke moves a single perspective through the trajectory, then has the display jitter the camera slightly to the right and left in sequence. When the right glimpse is shown, input to the left eye pauses. Then the alternate glimpse is shown to the left eye while the right eye's output pauses. 
You can envision this by blinking your right, then left eye while looking at your finger in front of your face. Each eye sees more <i>behind the close object's edges</i> than the other eye in that instant. So objects near appear to hover close to you against the background, which barely moves at all.</p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhrwlPUWQ2IMTgJFhCFcGhS3IWr9WrbL60caRQWA6nVh0QskWb_8CwQmL0GHQba62GlqlxK_k3uEkN0etY9Ryy1-14Tk8ZgZbhyp-1y5FP8sdz7w43DPmGAsWhVqT-7tkjpVexdnuKzNrbM5x5F-Rjq4lPKaSRvEq4PJiHtkiRdbpEdcIRo-tQPLlow/s5108/Final%20Fantasy.png" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="2868" data-original-width="5108" height="360" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhrwlPUWQ2IMTgJFhCFcGhS3IWr9WrbL60caRQWA6nVh0QskWb_8CwQmL0GHQba62GlqlxK_k3uEkN0etY9Ryy1-14Tk8ZgZbhyp-1y5FP8sdz7w43DPmGAsWhVqT-7tkjpVexdnuKzNrbM5x5F-Rjq4lPKaSRvEq4PJiHtkiRdbpEdcIRo-tQPLlow/w640-h360/Final%20Fantasy.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Vast landscapes of Final Fantasy VII appear more realistic with parallax depth rendering. <br /><a href="https://www.theverge.com/2022/8/10/23300463/ffvii-remake-intergrade-pc-vr-luke-ross-mod">https://www.theverge.com/2022/8/10/23300463/ffvii-remake-intergrade-pc-vr-luke-ross-mod</a></td></tr></tbody></table><p>The effect, when you perceive it for the first time, astounds you with how realistic the portrayed landscape becomes. It's like having a 3D IMAX in your home to experience this with a VR headset. The exciting thing is that game designers and directors don't have to rework their entire product to allow this to be possible. AER can be done entirely post-production. It is still a fair bit of work. 
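The alternating cadence described above can be sketched in a few lines. This is my own illustrative sketch under assumed parameters (the interpupillary-distance value and the function shape are hypothetical, not taken from Ross's actual mod):

```python
IPD = 0.064  # assumed interpupillary distance in meters (illustrative value)

def aer_schedule(num_frames, ipd=IPD):
    """For each frame, pick which eye's display updates and how far the single
    rendering camera is jittered from center. The other eye simply holds its
    previous frame, so only one view is ever rendered per frame: the brain
    fuses the alternating offset views into a perception of depth."""
    schedule = []
    for frame in range(num_frames):
        eye = "left" if frame % 2 == 0 else "right"
        offset = -ipd / 2 if eye == "left" else ipd / 2
        schedule.append((eye, offset))
    return schedule
```

The key economy is visible in the loop: one render per frame, as in the flat original game, rather than the two renders per frame that true stereoscopy demands.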
But it is much more feasible to achieve at grand scale than rendering all legacy media anew in 3D VR stereoscopic view. This makes me believe it will be a short matter of time before this is commonly available to most readers of my blog. (Especially if I have anything to do with this process.)</p><p>You may not have a consumer VR headset at your disposal yet. But currently <a href="https://www.hp.com/us-en/shop/pdp/hp-reverb-g2-virtual-reality-headset" target="_blank">HP Reverb</a>, <a href="https://www.picoxr.com/us/" target="_blank">Pico</a>, <a href="https://www.meta.com/quest/" target="_blank">Meta Quest</a>, and <a href="https://www.vive.com/us/" target="_blank">HTC Vive</a> are all cheaper than my 3D Vizio TV. The rendered experience of a 65 inch TV in your living room is still typically smaller in your field of view than that of a wide-field-of-view VR headset. So over coming years, many more people may opt for the nearer screen over the larger screen. When they do, more people will start seeking access to 3D content, which now, thanks to Luke, has a more scalable way to reach this emerging audience.</p><p><br /></p><p><br /></p><p><br /></p><p><br /></p>ncubeeighthttp://www.blogger.com/profile/07874174268038500927noreply@blogger.com0tag:blogger.com,1999:blog-4288746931605915378.post-9917232092728336652022-06-04T18:19:00.000-07:002024-01-23T17:32:33.737-08:00Going into the cave to see the world differently<p>Growing up in Oregon, where there are many volcanoes, my family would
often go spelunking in the lava tubes left over from past eruptions.
These caves formed where the outside of a river of lava cooled, letting
the inside of the lava flow further downhill, leaving a hollow husk of
the river's rock shape to climb in. Caves have always evoked a kind of supernatural
awe in me. The sensory deprivation of the darkness brings on the humility we feel when we take ourselves out of societal
context, a feeling of ascending out of our ordinary
self-awareness. When I travel abroad, I enjoy exploring caves and seeing the lore that has sprung up about them. </p><p></p><p>In <a href="https://leapingaroundtheworld.wordpress.com/indonesia-bali/" target="_blank">Bali, Indonesia</a>
there is a series of caves that one can explore at
the basins of the rivers flowing down the sides of the volcanic island.
However, the caves of Bali are man-made. Many centuries ago,
people used tools to carve out the caves at the base of the river into a
series of sheltered rooms. When the monsoon rains came, all the caves
would be submerged, only to be navigable again the following year.<br /></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi8hNSBp-KEa9gb46VyQmSS6sMFE3Ga8DnedP5nj4m0H0pdOtw-OUrQ9pvvdvW7T8ywi5-33W59tfmeAMFBOh2wgRcoQRbemNudaC-sY-fbbW57WVvKNozcPTUBYM2aRnXdXaHg4_khF5pXPmnLeNc86-Ta0VZsJtOHTBHQFjgNgzuCxu35vhX1jkIj/s640/BaliCave.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="480" data-original-width="640" height="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi8hNSBp-KEa9gb46VyQmSS6sMFE3Ga8DnedP5nj4m0H0pdOtw-OUrQ9pvvdvW7T8ywi5-33W59tfmeAMFBOh2wgRcoQRbemNudaC-sY-fbbW57WVvKNozcPTUBYM2aRnXdXaHg4_khF5pXPmnLeNc86-Ta0VZsJtOHTBHQFjgNgzuCxu35vhX1jkIj/w640-h480/BaliCave.jpg" width="640" /></a></div><p></p><p>The islands of Indonesia used to be home to a different species of hominid
prior to Homo Sapiens' arrival. <a href="https://en.wikipedia.org/wiki/Homo_floresiensis" target="_blank">Homo Floresiensis</a> was a smaller hominid
who also resided in caves like the <a href="https://www.science.org/content/article/ancient-dna-puts-face-mysterious-denisovans-extinct-cousins-neanderthals" target="_blank">Denisovans</a> of Siberia. Homo Floresiensis is suspected to have evolved differently from Homo
Sapiens and Denisovans, to be smaller in stature because of the constrained natural resources of the Indonesian archipelago. Many
other species of animal discovered there were also diminished in
stature in contrast to their respective ancestors on the Asian continent. When I
climbed into the cave shelters on Bali, which likely were carved many
thousands of years after Floresiensis' time, I pondered the significance of caves as
protective and spiritual sanctuaries for our Hominid species through the
millennia. Echoes of these other hominids live on in our DNA as some of
them cohabited and interbred with Homo Sapiens as the latter spread
around the globe.</p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi-N4Emc3jrVPtMSR_dFsanwS-g9Mr3q6QxVvB00Dg8IO5uurcg2e6mBgtEPR4CIfnEM897OXvsFZWqqZUczQroY-D049NkvLcwsQifLmD7ObzxeghG2k_tOyooVuLq0HS6t7qYFp0dHaDUp9G85EGLcNbUzqxCGacXB8nmRb7vyCt25RnSvh4Cl5j_/s640/CaveView.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="480" data-original-width="640" height="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi-N4Emc3jrVPtMSR_dFsanwS-g9Mr3q6QxVvB00Dg8IO5uurcg2e6mBgtEPR4CIfnEM897OXvsFZWqqZUczQroY-D049NkvLcwsQifLmD7ObzxeghG2k_tOyooVuLq0HS6t7qYFp0dHaDUp9G85EGLcNbUzqxCGacXB8nmRb7vyCt25RnSvh4Cl5j_/w640-h480/CaveView.jpg" width="640" /></a></div><p></p><p>In <a href="https://leapingaroundtheworld.wordpress.com/sri-lanka/" target="_blank">Sri Lanka</a>, caves served as monastic retreats for monks and mystics as Buddhism swept through the region, encouraging a spiritual path of asceticism and meditation away from society's hubs of bustle and industry. The giant mesa of <a href="https://en.wikipedia.org/wiki/Sigiriya" target="_blank">Sigiriya</a> was one such retreat with ornate frescoes painted on the inside of the caves from a period of time that a local king sought to fashion the mesa into a paradisiacal castle fortress. 
The inside of the cave is a representation of the ideas of beauty of the time, the way a camera obscura projects the outside world through a pinhole lens or mirror into a dark chamber, now frozen in time for the next era's visitors to witness.<br /></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiMU2mxhjCkJCghc0v1zq-YhW_GDkSWSH7eLY-rmNMCFULNQFSxyzEfSbzGYHt4CxhEOjz8g3S5hZU5Q_c_Jk7pBfuNA3jY8jdM2s3K_irTjypW_xKD_M9AgZzZ9RZBLTmjkgJSHLyxVjAHZX6vVik6n65lyLYQHJ4V3cTLHbbfJm9UE3Ex28HoA4dP/s800/SigiriyaCaveFrescoes.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="516" data-original-width="800" height="412" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiMU2mxhjCkJCghc0v1zq-YhW_GDkSWSH7eLY-rmNMCFULNQFSxyzEfSbzGYHt4CxhEOjz8g3S5hZU5Q_c_Jk7pBfuNA3jY8jdM2s3K_irTjypW_xKD_M9AgZzZ9RZBLTmjkgJSHLyxVjAHZX6vVik6n65lyLYQHJ4V3cTLHbbfJm9UE3Ex28HoA4dP/w640-h412/SigiriyaCaveFrescoes.jpg" width="640" /></a></div><p>In <a href="https://leapingaroundtheworld.wordpress.com/indonesia-bali/" target="_blank">Thailand</a>, the lineage of Rama kings had summer retreats near the caves under the country's mountainous southern coast. A few hours' hike from the beaches, you can visit a vast underground cavern replete with a temple pavilion and many images of the pre-enlightenment Siddhartha. He sits as if frozen in time in the gesture of pointing at the ground, where he resolved to stay until he crossed the precipice of Nirvana. (Siddhartha was one of many Buddhas in the broader tradition. But in the Thai tradition, this particular Buddha is important to the culture because he represented the path of independent transcendence for all beings, an ideal for the everyday community to pursue in emulation.) 
Outside the national palace of Bangkok, there is a small pavilion that looks exactly like the pavilion Rama V had built in <a href="https://mymodernmet.com/phraya-nakhon-cave/" target="_blank">Phraya Nakhon</a> cave. </p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjXoFF57QgcZT_4BoZLUKxhPPFlumI_uiwQtNoDVPDV1cgCwlfwI55ZLa-H_WWm4Yav1M8TifLCwmiG2hsRP50i7i2bwQ73v2_-NFc88zdpamm4QKgMh_Fqodds8HaSA2ULmJGq8oeTZPE4WImYz25bv8uqe4PTlZGZgpBfsDhGSHwmzvobKxQuu6I1/s1628/PhrayaNakhon-3.jpeg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="916" data-original-width="1628" height="360" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjXoFF57QgcZT_4BoZLUKxhPPFlumI_uiwQtNoDVPDV1cgCwlfwI55ZLa-H_WWm4Yav1M8TifLCwmiG2hsRP50i7i2bwQ73v2_-NFc88zdpamm4QKgMh_Fqodds8HaSA2ULmJGq8oeTZPE4WImYz25bv8uqe4PTlZGZgpBfsDhGSHwmzvobKxQuu6I1/w640-h360/PhrayaNakhon-3.jpeg" width="640" /></a></div><p>Perhaps the king would use that pavilion outside his window as a mental reminder of the symbolic connection to the cave where the monks would meditate for the transcendence of Samsara for all of civilization. 
Venturing to the north through <a href="https://leapingaroundtheworld.wordpress.com/laos/" target="_blank">Laos</a>, there are hundreds of caves to explore where local citizens create temples of reclining Buddhas deep in the karst mountains to revere the monastic traditions that spread through the country as Buddhism spread north from India to China.</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhHpSN0HNssYjEsdLQaFNT53Cg-XF-Pzg6D4ol0MoOSRj2VMVhNV96e-DAghk5eCnaGmysRpvAn8sukMzLDRlsg-4W2efuLcyBiwG7TVRuM8T84KobaCQspd5Adr2MNjufnRwGxrU1h515t412yPyXcBm136iSOJJA5emYsrmWvwdHAIaw4dmiiB7jL/s640/LaotianBuddhaCave.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="640" data-original-width="480" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhHpSN0HNssYjEsdLQaFNT53Cg-XF-Pzg6D4ol0MoOSRj2VMVhNV96e-DAghk5eCnaGmysRpvAn8sukMzLDRlsg-4W2efuLcyBiwG7TVRuM8T84KobaCQspd5Adr2MNjufnRwGxrU1h515t412yPyXcBm136iSOJJA5emYsrmWvwdHAIaw4dmiiB7jL/w480-h640/LaotianBuddhaCave.jpg" width="480" /></a></div><p></p><p></p><div class="separator" style="clear: both; text-align: left;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjLRDbML29fKEsghUOgzARmdB8RByIMLXHuGl4CRqYYrje4hJI7hOVvpG7tsyEKCz4iNx3Ucd-yAtbiH8PMGb5BA_KMgIo6MCQAI3N0kKaIZLRrpzQwuqP3oqtZ75s3zwyOFq4cJFwDvkl8EHefBly-AMxI94qOoCoIynYYIRu859sG66roNNhQLOwS/s2048/TreasuryPetra.jpg" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="2048" data-original-width="1529" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjLRDbML29fKEsghUOgzARmdB8RByIMLXHuGl4CRqYYrje4hJI7hOVvpG7tsyEKCz4iNx3Ucd-yAtbiH8PMGb5BA_KMgIo6MCQAI3N0kKaIZLRrpzQwuqP3oqtZ75s3zwyOFq4cJFwDvkl8EHefBly-AMxI94qOoCoIynYYIRu859sG66roNNhQLOwS/w299-h400/TreasuryPetra.jpg" width="299" /></a>In Jordan, the <a 
href="https://leapingaroundtheworld.wordpress.com/jordan-the-carved-cliffs-of-petra/" target="_blank">Nabatean people</a> used caves as tombs and fashioned
majestic facades in the styles of architecture they'd seen elsewhere, chiseling and polishing the sandstone like sculptors. Allegedly, once their city of caves became a destination for mystics and pilgrims, they devised an economic scheme requiring all visitors to exchange their gold coins for lesser-value coins minted locally, and Petra became a very wealthy city, offering its citizens relative prosperity built on a metropolis carved from sandstone with the deft artifice of the chisel. While the facades project palace-like grandeur, the interiors are unadorned, providing little more than shelter from the elements. Walking into them is a transformative journey from outward splendor to inner awe: the awe of natural places with the human artifice dropped away. To this day the Bedouin people there welcome tourists to explore the
caves with concessions and hiking provisions provided at tents set up to
give you respite from the sun while you enjoy tea and refreshment. </div><p>In Egypt, caves were tombs for royalty that provided a path to eternal life. Excavated only recently, these caves portray rich mystical traditions and an understanding of the cycle of life, death and rebirth, enacted through the worship of human representatives of archaic gods. Many of the gods of Egypt are depicted in animal and chimeric human forms. The mortals who were revered as leaders of the society may have been seen as temporal embodiments of the deities. Spirits of the underworld would gift life to humans through the Ankh symbol. When the kings and queens of Egypt died, they were entombed in ornately decorated caves that remain well preserved to this day near <a href="https://leapingaroundtheworld.wordpress.com/egypts-ancient-side/" target="_blank">Luxor and Aswan</a>.</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhQbBnl5GPRIf0GP_OufXRjhMfHaAkEMgHmctF6Yb9SdAGNHhPUc-rDBhk_s6iecYL7447n2b7L_YN1l29sYllQ3D2DzBJwVXrYq4PjuC1cmLrQOg6jIOlMdonTlqlEub_umlgdVk6aoyuIMvpRkcp-LygeDt-YRxyfkibhoQfwBoMUm197WQshxtgM/s2165/AswanTomb.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1572" data-original-width="2165" height="464" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhQbBnl5GPRIf0GP_OufXRjhMfHaAkEMgHmctF6Yb9SdAGNHhPUc-rDBhk_s6iecYL7447n2b7L_YN1l29sYllQ3D2DzBJwVXrYq4PjuC1cmLrQOg6jIOlMdonTlqlEub_umlgdVk6aoyuIMvpRkcp-LygeDt-YRxyfkibhoQfwBoMUm197WQshxtgM/w640-h464/AswanTomb.jpg" width="640" /></a></div><p>In <a href="https://leapingaroundtheworld.wordpress.com/mexico-yucatan/" target="_blank">Mexico</a>,
caves are considered the domain of the rain god Chaac. The many cenotes (water-filled limestone sinkholes) of the Yucatan peninsula formed in part along the rim of the impact crater of the <a href="https://en.wikipedia.org/wiki/Chicxulub_crater" target="_blank">Chicxulub</a> meteor, the impact that led to the extinction of the dinosaurs 66 million years ago. Maya priests would build altars at the bottoms of these caves in supplication to Chaac to bring rains for the sustenance of the maize crops. These freshwater cenotes supplied the flatland jungle areas of Yucatan with drinking water that sustained the inland population until the civilization's collapse around 900 AD, after which the Maya shifted their cultural hubs
on the coast of Yucatan rather than the jungles.</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgJK4t4CgF5xnR7-7VQr95wGL2i6_vA-HHVePtd0F_DQdV-fNw_uVXfjf8Ws7NqZ_OYDGPTn19T7EWUBrHste22e_m_GDavEKmrNrjsz5IPTTW5-q2JIn0CzVfT4TJfhDGu4_J5l2ROkiBrsVMMQUhd3oud8n1-repV1TObgSmAd7dVHSiFuaX0QZJr/s2048/CenoteYucatan.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1536" data-original-width="2048" height="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgJK4t4CgF5xnR7-7VQr95wGL2i6_vA-HHVePtd0F_DQdV-fNw_uVXfjf8Ws7NqZ_OYDGPTn19T7EWUBrHste22e_m_GDavEKmrNrjsz5IPTTW5-q2JIn0CzVfT4TJfhDGu4_J5l2ROkiBrsVMMQUhd3oud8n1-repV1TObgSmAd7dVHSiFuaX0QZJr/w640-h480/CenoteYucatan.jpg" width="640" /></a></div><p>Not all caves can be visited by avid tourists like myself. Because of the significance of certain caves to our cultural lore, archeologists and artists are making them discoverable via facsimile copies so that we can all share our fascination with this part of our history and culture. One such archeological effort was captured in the Werner Herzog film, <a href="https://www.imdb.com/title/tt1664894/" target="_blank">Cave of Forgotten Dreams</a>. In this movie, he filmed the Chauvet-Pont d’Arc cave in
France with a group of anthropologists as his guides. The cavern is covered with Paleolithic art depicting animals, along with the outline of a human hand left like a signature by the artist. Herzog and the anthropologists piece together the story of our human ancestors' intentions in decorating this cave and speculate on the daily life and potential spiritual practices of the people who made the cave paintings. What was unique about this film, beyond the site itself, was that Herzog filmed it in 3D, giving the viewer a sense of depth that cinema usually lacks. Looking
into the dark recesses of the cave as the crew enters conveys some of the first-person suspense of venturing into a cave yourself. </p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjSuZVa8UUJbwRHAwQGAPrSIiFYilYo7INc3J1LtfaX_qJMYk5lCDJ_8ddnVR63HlNnIFfd2cOOKDLx-0_1aE3KoBWa1Q0tD45843iPgCUYiZCPZV1xrg9fIxObKUu7a5b22wO94kXfmBDzCQm7Y8fT2vhqEBXcIZwv0v93Y1vO8ecXUERaj3TwQ5ZBx1g/s750/ChauvetCave.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;" target="_blank"><img border="0" data-original-height="750" data-original-width="750" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjSuZVa8UUJbwRHAwQGAPrSIiFYilYo7INc3J1LtfaX_qJMYk5lCDJ_8ddnVR63HlNnIFfd2cOOKDLx-0_1aE3KoBWa1Q0tD45843iPgCUYiZCPZV1xrg9fIxObKUu7a5b22wO94kXfmBDzCQm7Y8fT2vhqEBXcIZwv0v93Y1vO8ecXUERaj3TwQ5ZBx1g/w400-h400/ChauvetCave.jpg" width="400" /></a></div><p></p><p style="text-align: center;"><b><span style="font-size: x-small;">(Chauvet Cave Drawings. Source: https://whc.unesco.org/en/list/1426/)</span></b></p><p>Seeing pictures and movies about caves can convey a lot of cultural context. But stepping into them is an altogether different experience that is hard to simulate through media. One of my most affecting memories is of a trip to Okinawa, where my grandfather had served in the military during WWII. He had told our family stories of the horrors of past wars he'd had to endure. I rented a car to drive to the sites I'd heard about in his stories. 
One of the most touching experiences was visiting the <a href="https://www.himeyuri.or.jp/EN/info.html" target="_blank">Himeyuri Peace Memorial</a>. This was the site of a cave where a group of young students had been conscripted to staff a makeshift hospital for wounded soldiers. When the advancing military reached the area, soldiers were instructed to throw tear gas into the cave to force any civilians inside to surrender. The students were too afraid to come out of the cave and suffocated there. The people of Okinawa built a museum on the site to commemorate the lives of the students who perished and to chronicle the suffering of all people during the war, in the hope of seeing an end to wars globally. </p><p>Particularly touching for me was a dark room where each student was introduced by name, with a placard describing what was known about their life. I saw the faces of youth and read the chronicles of their last days, spent trying to save the lives of as many wounded people as they could. I came to understand the fear they must have felt when they crawled into the cave, never to come out. The final exhibit was a physical reconstruction of the cave, which memorial visitors could step into to see what the mouth of the cave the students stared up at must have looked like. 
Standing in this facsimile cave, looking out at the green trees in the sun, conveyed something that reading and hearing stories never could.</p><p> </p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhPhoWAfud5tujjKEugXu4EzQXAmTsu4iVlVyh0NhGQF29JafjQA0ExqsrqdL33O7m2JXXrvU9QZpMltcEQRgET8csmSDCGmZBVcmbW9Emwk_LYPIJ-cf4IhoYm2ymEquXai2fL2mf09oHGfc4KIhiSlFBItuIt_bhgM5eIwAgen9htoA-yaopckUU_n0o/s1100/himeyuri-monument-2_orig.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="Source: https://jinotourblog.weebly.com/himeyuri-peace-museum" border="0" data-original-height="796" data-original-width="1100" height="464" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhPhoWAfud5tujjKEugXu4EzQXAmTsu4iVlVyh0NhGQF29JafjQA0ExqsrqdL33O7m2JXXrvU9QZpMltcEQRgET8csmSDCGmZBVcmbW9Emwk_LYPIJ-cf4IhoYm2ymEquXai2fL2mf09oHGfc4KIhiSlFBItuIt_bhgM5eIwAgen9htoA-yaopckUU_n0o/w640-h464/himeyuri-monument-2_orig.jpg" title="Himeyuri Peace Museum" width="640" /></a></div><p></p><p style="text-align: center;">(<b><span style="font-size: x-small;">Himeyuri Peace Museum Cave. Source: https://jinotourblog.weebly.com/himeyuri-peace-museum</span></b>)<br /></p><p>So why all this talk of international travel, human heritage, history and venturing into caves, you may wonder? I see it as metaphorically similar to the experience of emerging VR technology. In this new context, we put on a visor to take in an ornate expression of somewhere else (or somewhen else) by temporarily silencing the environment immediately around us. One of my earliest experiences of VR's potential was a tour of an Egyptian tomb for <a href="https://www.oculus.com/experiences/rift/1491802884282318/" target="_blank">Nefertari</a>. The simulated experience stitches together thousands of scaled images to give the viewer a sense of being present in the physical tomb in Luxor, Egypt. 
While being in a place physically has a visceral impact that is hard to describe, representing a historic location through media offers a glimpse of the same space to the many more people who may never have the opportunity to travel there.</p><p>Now that LIDAR scanning has emerged as an accessible mainstream technology, we can create more of these realistic captures of caves and world monuments in ways that people anywhere can experience. A decade ago, I encountered the <a href="https://keckcaves.org/" target="_blank">Keck Caves</a> archeology project, which captures underground archeological digs using LIDAR scanning; the precise depth measurements can be rendered as a point cloud of map data to track the physical attributes of a site during an excavation. Once a site is scanned, it can be studied by any number of later archaeologists through digital renderings, without any impact on the physical site itself, by donning a set of glasses that reconstructs the parallax effect for depth perception. I was able to put on some early prototype VR 3D glasses then, which conveyed the depth of the caves by rendering different images to my right and left eyes. 
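The parallax rendering those prototype glasses performed can be sketched with a toy pinhole-camera model: each scanned point is projected once per eye, and the horizontal disparity between the two projections is what the brain reads as depth. This is purely an illustrative sketch, not the Keck Caves software; the focal length and eye separation below are assumed values, not parameters of any real headset.

```python
# Toy stereo projection for a scanned point cloud. Each eye gets its own
# pinhole projection; nearer points produce larger disparity between the
# left and right images. FOCAL and IPD are illustrative assumptions.

FOCAL = 800.0  # focal length in pixels (assumed)
IPD = 0.064    # interpupillary distance in meters (typical adult)

def project(point, eye_x):
    """Project a 3D point (x, y, z in meters) onto one eye's image plane."""
    x, y, z = point
    u = FOCAL * (x - eye_x) / z  # horizontal pixel coordinate
    v = FOCAL * y / z            # vertical pixel coordinate
    return (u, v)

def disparity(point):
    """Horizontal offset between the two eyes' views; larger means nearer."""
    (u_left, _) = project(point, -IPD / 2)
    (u_right, _) = project(point, +IPD / 2)
    return u_left - u_right

# A near rock and a far wall from a cave scan:
near_rock = (0.5, 0.2, 2.0)   # 2 m away
far_wall = (0.5, 0.2, 20.0)   # 20 m away
print(disparity(near_rock))   # large disparity
print(disparity(far_wall))    # ten times farther, ten times smaller
```

The disparity works out to FOCAL × IPD / z, which is why depth perception from stereo falls off rapidly with distance.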
</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjSWDa0IiyWGgPbauWIsThNLMCCahV2ewLXdMf8ra_2pF7hIv9rGD4-_vXXBVNMJPAvke3kUF-oyCJUQugv4JPx0Wx86-ecfolIXohR_HFJISeCYQiOwYkYJXrq01R9RTl6mrJhrfvebhm33sSw0Jv5TjAuQDYsZv4YLwYU4t0yz0LHEpRypzjVIcb1i2w/s1988/KeckCavesVR.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1704" data-original-width="1988" height="343" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjSWDa0IiyWGgPbauWIsThNLMCCahV2ewLXdMf8ra_2pF7hIv9rGD4-_vXXBVNMJPAvke3kUF-oyCJUQugv4JPx0Wx86-ecfolIXohR_HFJISeCYQiOwYkYJXrq01R9RTl6mrJhrfvebhm33sSw0Jv5TjAuQDYsZv4YLwYU4t0yz0LHEpRypzjVIcb1i2w/w400-h343/KeckCavesVR.jpg" width="400" /></a></div><p>I was delighted to discover that archeologists are already publishing VR renderings of historical sites, letting general consumers experience the Egyptian tombs without having to fly around the world. When I planned my recent visit to the Angkor Wat temple complex in Cambodia, I donned my VR headset to fly over the landscape in Google Earth (built on the former Keyhole acquisition) with pictures of the layout of the historic site. It's very exciting to see artists and game developers embracing the 3D medium as a new means of artistic expression. But as a traveler, I want to see more world heritage sites. 
I tremendously admire the work of the <a href="https://zamaniproject.org/" target="_blank">Zamani Project</a> and <a href="https://www.cyark.org/projects/" target="_blank">Cyark</a> to preserve more of our historical landmarks for future generations to discover via new media interfaces.</p><p>When I introduced my mother to VR, she made an astute observation. She didn't want to see synthetic renderings of artistic spaces. She wanted to use the lenses to see the astronomical photographs from the James Webb Space Telescope, to see the edges of our universe in greater detail, like a planetarium would show. Being able to fly to the edges of the observable cosmos, based on the data we have on the structure of our tiny 13.8 billion year old spacetime bubble, is not an insurmountable challenge for educators and developers to bring about in the coming decade. We have only just learned to see that far. Now making that vision accessible to all is within our technical grasp. </p><p>While the artistic possibilities of VR are profound on their own, it also allows us to see the tremendous breadth of human history and culture by capturing the physical world for educational and touristic exploration. While it may strike some people as odd to wear glasses or head-mounted displays, the vision we can obtain by going into the dark helps us to see further than we otherwise might.<br /></p><p><br /></p><br />ncubeeighthttp://www.blogger.com/profile/07874174268038500927noreply@blogger.com0tag:blogger.com,1999:blog-4288746931605915378.post-71366342897184535612022-05-01T16:43:00.000-07:002022-06-15T07:52:27.634-07:00Approaches to enhancing cartography and location-discovery over the web<p>A French postal worker, who used to fly the coast of Africa delivering postal dispatches, wrote a delightfully witty book I read long ago. 
In the story of <a href="https://en.wikipedia.org/wiki/The_Little_Prince" target="_blank">Le Petit Prince</a>, the author, Antoine de Saint-Exupéry, talks about how his plane broke down in the desert one time. He narrates the tale as his pilot self, recounting the story of a small person who approached him while trying to repair the plane. This character claimed to be an alien who had been hopping planets prior to landing on Earth, which had such tremendous gravity that it couldn't be escaped. Being stuck on Earth didn't seem to be a particular bother other than the fact that it meant he could never return to his origin planet, where his favorite flower lived. The story turns into a narrative on the nature of love and loss as the alien prince tries to adapt to the ways of Earth and comments on all the strange characters he'd met on the various planets prior to getting marooned in the desert with the pilot.</p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg_werv87ARCfeyb-lEhhIAo7bjfGtjsaY813Gs7vY5pHTXJw9NUTRcnIgD67QZTv_O3zPcqcG4XoMMVDRx2TN67GHBAn1R4_9q1EM1TmWx61IeskM_8CHxMmZA4BzLRf_vsZ60ckpyCZ4/s372/BusinessMan.jpg" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="287" data-original-width="372" height="154" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg_werv87ARCfeyb-lEhhIAo7bjfGtjsaY813Gs7vY5pHTXJw9NUTRcnIgD67QZTv_O3zPcqcG4XoMMVDRx2TN67GHBAn1R4_9q1EM1TmWx61IeskM_8CHxMmZA4BzLRf_vsZ60ckpyCZ4/w200-h154/BusinessMan.jpg" width="200" /></a></div><p>I recount this tale because I've been thinking about cartography a good deal recently. The efforts of various companies to create proprietary maps remind me of the "businessman" character in Antoine's story. (See <a href="https://albalearning.com/audiolibros/exupery/elprincipito-en.html" target="_blank">chapter 13</a>.) 
He was too busy to talk to our protagonist because he was tabulating all the stars. He asserted that by keeping a record of the stars he owned them. The prince channels the perspective of Chief Seattle that it is preposterous to assert that one can own nature. (In Chief Seattle's case it was native tribal land he was <a href="https://suquamish.nsn.us/home/about-us/chief-seattle-speech/" target="_blank">being asked to sell</a> to the US government.) The augmented reality mapping community isn't trying to own spaces, but rather to create a utilitarian linking model, metaphorically similar to the Global Positioning System (GPS) but specifically for the shared context between people, for use in location-based content discovery and information sharing. </p><p>You might think we could just have one app to rule them all, and that all location-based content should be visible within that. But this limits utility. It calls to mind the parable of <a href="https://en.wikipedia.org/wiki/Shantideva" target="_blank">Śāntideva,</a> the Indian Buddhist monk who mused:</p><p>“Where would I find enough leather<br />To cover the entire surface of the earth?<br />But with leather soles beneath my feet,<br />It’s as if the whole world has been covered.” <a href="https://www.goodreads.com/author/quotes/29132._ntideva" target="_blank">*</a><br /></p><p></p><p>Getting everyone on Earth to use one tool in order to benefit from location-based data is neither feasible nor optimal, any more than covering the earth with leather. It would spur a new digital-divide problem on top of the one we already have, and become too cumbersome to maintain and update securely. The internet is a more scalable approach than an app-siloed approach, too. The web's pooled effort of millions of developers, curators and contributing users is already adept at rich content generation, with adaptable sharing, access and security already taken care of. 
So in the location-based web, we have to think about renderings of web content in a way that can be shared by thousands of browsers, apps and location-"aware" tools that don't have screens. This will allow individuals worldwide to put soles beneath their feet to navigate this information terrain in diverse and dynamic ways.<br /></p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://location.services.mozilla.com/map#0.7/0/0" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;" target="_blank"><img alt="Mozilla Location Services map of radio waves" border="0" data-original-height="1070" data-original-width="1446" height="237" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjx1W9nUZDcwVE9QCF8i85NY9xjm1pNqfHnf3iaw74gyuyz_p2hs-XehjUyUYr47dgv32-r7XZHrQcCeXmfrJ2VzCmb47sHVu3jBe-IEHghXEY1Z4eA9gjSNfw_FJCh60KWa5x6qEjPd-g/w320-h237/US+Radio+Location+Map.png" width="320" /></a></div>While one can't directly own locations on a map, we can indeed create facsimile versions of real space and thereafter build multiple virtual content layers atop that. At <a href="https://www.mozilla.org/en-US/firefox/" target="_blank">Mozilla</a> (a non-profit, open-source web development company) we had a mapping initiative that charted location-based radio-wave triangulation across multiple wavelengths (cell-tower for long range, wifi for short range), stored in a digital repository for developers to build upon. With <a href="https://location.services.mozilla.com/map#0.7/0/0" target="_blank">Mozilla Location Services</a>, we could leverage public-frequency radio triangulation anywhere in the world with telephony coverage as a navigable utility layer for posting or pulling web content. We could then build our own content layers and mapping tools pinned to locations in the physical world. 
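To make the triangulation idea concrete, here is a minimal sketch of how a client might estimate its position from the wifi beacons it can hear. This is not the actual Mozilla Location Services algorithm or API; the beacon table, MAC addresses and signal-strength weighting are invented for illustration, and a real service resolves beacon identifiers to coordinates server-side with far more robust statistics.

```python
# Weighted-centroid position estimate from observed radio beacons.
# Illustrative only: the beacon database and weights are assumptions.

# Hypothetical database mapping wifi BSSIDs to known (lat, lon) positions.
BEACONS = {
    "aa:bb:cc:00:00:01": (37.7793, -122.4192),
    "aa:bb:cc:00:00:02": (37.7795, -122.4188),
    "aa:bb:cc:00:00:03": (37.7790, -122.4185),
}

def estimate_position(observations):
    """Weighted centroid of the observed beacons.

    observations maps BSSID -> received signal strength in dBm
    (e.g. -45 is strong, -85 is weak); stronger signals pull the
    estimate closer to their beacon.
    """
    lat_acc = lon_acc = total_w = 0.0
    for bssid, dbm in observations.items():
        if bssid not in BEACONS:
            continue  # beacon not in the database: ignore it
        lat, lon = BEACONS[bssid]
        weight = max(1.0, 100.0 + dbm)  # -45 dBm -> 55, -85 dBm -> 15
        lat_acc += weight * lat
        lon_acc += weight * lon
        total_w += weight
    if total_w == 0:
        return None  # no recognizable beacons in sight
    return (lat_acc / total_w, lon_acc / total_w)

# Strong reading near beacon 1, faint reading near beacon 3:
fix = estimate_position({
    "aa:bb:cc:00:00:01": -45,
    "aa:bb:cc:00:00:03": -85,
})
print(fix)  # lands close to beacon 1, nudged toward beacon 3
```

The same shape of computation works for cell towers at longer range; the service's value comes from the crowd-sourced beacon database rather than the arithmetic itself.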
<div><br /></div><div>What is the best user experience to navigate this “augmented” coordinate space? Unless a browser has a way of querying for location and rendering the pinned object or content, nobody will see it. My friend <a href="https://twitter.com/sometexthere" target="_blank">Ben Morrow</a>, who has been studying the AR space for years, explained, “This is a search and discovery problem. You have to ensure that whatever space you're in isn’t overburdened by irrelevant AR content, which would result in an ‘AR Garbage World,’ and that in a sparsely curated space you don’t experience the inverse, an ‘AR Desert World,’ where there is nothing of value for you to draw from.” So to deliver this, there needs to be public visibility of the assets available at a specific location, via a passive or direct querying utility, combined with user controls and preferences that let each person filter what is seen, and when. (Imagine strolling by a "discoverable object" at a coordinate near you. You might look for specific content, like a location-based list of places to eat nearby. But in this context, can content surface to you without your downloading an app bespoke to that content?) TV and terrestrial radio are content layers of real-time streaming video and audio abstracted <i>away from</i> specific coordinate locations. The spatial web content layer would overlap directly with places shared by millions of people daily. Navigating those layers should be easier than downloading a unique viewer for each channel. We have thousands of individual apps right now that render specific views of locally posted data. But we don't have a standardized approach by which any one of those app viewers could tune into all the channels of the others, the way a TV or radio can easily switch between wavelengths.<br /><p></p><p></p><p>To build this shared-content layer, we need engineers, artists, business teams and shared standards for engagement and architecture across millions of personal navigation devices. Rather than the celestial businessman who seeks to chart the stars alone, we need creators with the philosophy of an open-source cartographer. When the world-wide-web of hypertext documents charted the course for our internet of today, it started with a small interlinked corpus of nodes, which then expanded exponentially in the 1990s as millions of contributors posted and linked their own creations with others. Early approaches to content sharing and navigation depended on web-rings, where site hosts would link their content to other sites of known related content. Bloggers created their own specialized hubs of niche content. Companies like Yahoo, Excite and LookSmart hired hundreds of content curators to pull in sources of new content as the web grew. 
Netscape’s Open Directory Project and Wikimedia achieved similar content-mapping databases for multiple languages around the world, even without a corporate-sponsored business model. Then, once the scale of web content grew faster than it could be aggregated or charted by hand, a purely algorithmic model was needed to index the corpus of the web efficiently, leading to the web crawlers and Boolean query models we have for sifting content today.</p><p>Web search engines of today didn’t yield their value from a top-down approach, but rather a bottom-up one. They gained their utility for us by mimicking the way that web developers inter-link and label their content for discovery. Early algorithmic search engines (Altavista, Hotbot, Inktomi, Fast, Bing and Google, for instance) would crawl the web much like a spider, following the way each thread of the web linked to every other and ascribing weighted value to the different kinds of connections site hosts embedded in their html by way of “anchor-text links.” It was crowd-sourced in a way that made it more dynamic and resilient than anything a single company or group could create. </p><p>The spatial web will be woven similarly, in a crowd-sourced way, from the location reviews of restaurant-goers, the measured velocity of sharing actions and graffiti-style content postings in the physical world. 
The search engines of this cartographic space will prove their value by filtering the millions of tread paths of those wandering its trails and leaving notations for others, reinforcing the well-trod paths as likely to be of utility, much as the keyword-search model we’ve become accustomed to replaced the editorially curated web indexes of the past.</p><p>While the location-based internet has had many exciting phases of evolution over the past decades, it is rising to a fever pitch these days as a broad array of well-funded companies plan new utilities that map digital services over our shared physical space. The current pace of technology advancement here is a fascinating rabbit hole of strategies for cartography enthusiasts like me. <br /></p><p></p><p>I became particularly interested in cartography during my international travels (50 countries to date), when I would use tools derived from <a href="https://www.openstreetmap.org/search?query=san%20francisco#map=11/37.7852/-122.7277" target="_blank">Open Street Map</a>'s open-source database of physical locations to plan, journal and photographically document my journeys. Sometimes I was able to find my way about with digital maps in a way that was not possible with the market-available paper maps I'd bought. (In many cases, the places I'd go were not mapped precisely by Nokia Here, Bing or Google Maps in a useful way. For dense cities, those services are great. But for the digitally disconnected regions of the world, they don't provide the anticipated utility.) </p><p>I care about expanding this portion of our technology because I believe there is tremendous good we can do for our community, and for those who come after us, by facilitating greater levels of information layering on our physical world, chronicling information so that it is available to people in any specific place at a moment's search. 
The corpus of the web holds tremendous resources that are not directly accessible in a given location without deliberate, forward-leaning keyword querying. That's something we can address with good tools and intuitive user-experience models that make the web more applicable to the daily lives of people in specific places.<br /></p><p>Now that Virtual Reality gaming has advanced to a relatively mature market, several companies are pushing the 3D models of gaming engines (such as <a href="https://unity.com/products/unity-mars" target="_blank">Unity</a> and <a href="https://docs.unrealengine.com/en-US/SharingAndReleasing/XRDevelopment/AR/HandheldAR/AROverview/index.html" target="_blank">Unreal</a>) to create simulated virtual overlays of the shared space we live in. You may be concerned about these advancements from a safety perspective, with good cause. Distractions can be a danger in any context. The hazards of digital overlays on the real world are already being tested by pioneering companies, who will need to be mindful of this. With Niantic's gaming layer (discovered through mobile phone screens, and eventually digital glasses) there is a risk of people being distracted from real-world safety issues. Niantic themselves have taken care to ensure that users don't engage with <a href="https://www.harrypotterwizardsunite.com/" target="_blank">their games</a> in ways that would put them in physical danger. 
With the release of more eyeglass products from <a href="https://www.spectacles.com/" target="_blank">Snapchat</a>, <a href="https://www.nreal.ai/" target="_blank">Nreal</a> and <a href="https://about.fb.com/news/2020/09/announcing-project-aria-a-research-project-on-the-future-of-wearable-ar/" target="_blank">Facebook Aria</a>, we'll start to see more social utility, services and commerce advance this trend beyond the well-established use of Augmented Reality in the workplace, which has been the focus of earlier efforts of <a href="https://www.microsoft.com/en-us/hololens" target="_blank">Microsoft</a>, <a href="https://www.google.com/glass/start/" target="_blank">Google</a> and <a href="https://www.magicleap.com/en-us/magic-leap-1" target="_blank">Magic Leap</a>. Just as the mobile phone and automobile industry have taken measures to reduce distracted driving, caused by the proliferation of mobile phone usage in the wrong contexts, we can anticipate the industry addressing our concerns about such distractions outside of the driving context as well as these "mobile glasses browsers" become common in public.<br /></p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://www.houstonchronicle.com/techburger/article/The-strange-tale-of-Monocle-the-AR-pioneer-12371889.php" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;" target="_blank"><img alt="Yelp Monocle AR review overlay" border="0" data-original-height="1152" data-original-width="2048" height="181" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh9g73c2xVM0xDo6en3295z26YWb_fJTjOyFwVnm9LgE9VF26Kn_K7CSTO_xeUpcDvUXFLYOB04SLTh2kMCewKYIijSpzF8RGD7Z9R8P8MgrAruz_2Pu7MElSebc9k0SKjBkWv2gVWbr8c/w320-h181/Monocle.jpg" width="320" /></a></div>It is very easy to have content layers that trigger in-app content specific to only one developer, the way that <a 
href="https://www.houstonchronicle.com/techburger/article/The-strange-tale-of-Monocle-the-AR-pioneer-12371889.php" target="_blank">Yelp Monocle</a> showed AR overlays on the real-world camera view, but only showed Yelp content. The next giant leap will be when content can be app-agnostic, the way web pages are. We already have the file formats for representing spatial objects in a portable way. (KML and glb, for instance, are 3D equivalents of JPEG and MPEG.) We next need a shared, public hosting repository where digital assets can be queried in a location by means contextually relevant to the user's need. Due to the cost of the infrastructure necessary to support location content discovery, there needs to be an appropriate business model for sustaining the underlying architecture. <br /><p></p><p>Before it departed the Google Maps division, the Niantic project had an interesting angle on this. During the development of their first game, Ingress, I met the team at the San Francisco Game Developers' Conference. They explained
that there was a commercial value to moving people around in places. If
they were to place an Ingress portal in a location that happened to be a
store, those people playing Ingress nearby might eventually walk into
that store to buy something. So location-based incentives may be the economic engine of
a cartographic platform that influences where people decide to go. This is also of great value for travelers, who actively look for tips on how to explore a location: which places to visit, where to book lodging, and where to find culinary and cultural opportunities.<br /></p><p>Around the same time as that conference, I'd learned of a company that was creating a
real-world Pac-Man game that would drop little digital treats along routes in its
driving game, encouraging people to build a digital mapping layer
the company could claim as its proprietary travel-based index of the world. At a TEDx talk in Silicon Valley, their product manager described
the perils of map routing. She suggested that the Waze app could leverage its users' willingness to explore, somewhat like a group of foraging ants, to "<a href="https://en.wikipedia.org/wiki/Random_walk" target="_blank">random-walk</a>" the Earth until they knew the real-time travel
velocity of every navigable path on the planet. It struck me that this
was something of an ecological nightmare: it gamified driving the
roundabout way, just for a digital treat, instead of
navigating the direct path that would require the least expenditure
of resources, if not time. Perhaps over time the initial waste of the indirect-route incentive would reduce traffic for subsequent users who might not otherwise have known about an efficient alternative to the route a larger map service would recommend. Ultimately, Google decided that the gamified approach of Waze and its active "contributor" community of people reporting road hazards and route feedback added enough incremental value over Google Maps alone to merit an acquisition of
the company, while they decided to spin off the Niantic project as a stand-alone company.</p><p>If you happen to be a VR enthusiast, you have probably seen the rich 3D environments that Google Earth has rendered specifically for VR-ready devices. You can fly through the streets of New York, Chicago or San Francisco, seeing a spatially accurate depiction of these major cities. You may then be surprised that a similar level of effort has not been put into culturally or historically significant world locations. The reasons may seem obvious. The city-scapes are assembled using Google's lidar photography (such as the cameras on Waymo cars) combined with known information about the buildings that make up the city's topographical 3D structure. Obviously you can't drive a Waymo car through Chichén Itzá. An altogether different approach needs to be applied. One of Google's contributors explained that for areas that are not navigable by car, and thus can't be mapped with lidar, you must use aerial photography with photogrammetry (stitching of photos from different angles) to create depth maps. Putting San Francisco on the map is therefore very easy. Putting Chichén Itzá, Petra or Angkor Wat on the map is stunningly hard. It's a finite expense, but the benefit doesn't yet outweigh the cost. It's just a matter of time and logistics, though.</p><p>Considering the economic problem that leaves so much of Google Earth unmapped in 3D relief, you might wonder how AR is going to be any different. This is an issue that many people are actively trying to solve right now. The good news is that we can use the hive-like behaviors of humans to accomplish it, just like the Waze team did for the streets of the US. But there needs to be some benefit or value to participating that incentivizes people to do the contribution work necessary to make a good map. That's where people like me come in. 
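To make the foraging-ant idea concrete, here is a toy simulation of random-walking contributors gradually covering a grid of roads. This is my own illustrative sketch with made-up parameters, not Waze's actual algorithm; it only shows that with enough wandering, coverage of the map can only grow.

```python
import random

def coverage(steps, walkers=10, size=20, seed=42):
    """Fraction of a size x size road grid visited after `steps` moves
    per walker. A toy model of random-walk map contribution."""
    rng = random.Random(seed)
    visited = set()
    positions = [(size // 2, size // 2)] * walkers  # all start downtown
    for _ in range(steps):
        for i, (x, y) in enumerate(positions):
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            positions[i] = ((x + dx) % size, (y + dy) % size)  # wrap at edges
            visited.add(positions[i])
    return len(visited) / (size * size)
```

Comparing `coverage(50)` with `coverage(500)` shows the mapped fraction climbing toward 1.0 — the wasteful wandering described in the talk is exactly what fills in the map.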
As a mapping contributor, uploading photos, writing reviews and leaving tips for travelers, I'm doing the necessary work to create the content-rich location layer that will eventually populate our smart-glasses, automobile dashboard screens and social traveling apps. </p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhjSb2m-pxmWFiWrXAhej9AVDvTIvugFEHJkfD27ZbulPeAMN9jd6ODBzeEE6tQcFZbOUAKUWDjzmwLhtU64wLT7qebTrWsM6Ro2SX6ebAPZ_IyIZTRPkv44nSN6FFJQFZUMe_Ph-o9QSk/s1294/ADHere.jpg" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="1294" data-original-width="1038" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhjSb2m-pxmWFiWrXAhej9AVDvTIvugFEHJkfD27ZbulPeAMN9jd6ODBzeEE6tQcFZbOUAKUWDjzmwLhtU64wLT7qebTrWsM6Ro2SX6ebAPZ_IyIZTRPkv44nSN6FFJQFZUMe_Ph-o9QSk/w321-h400/ADHere.jpg" width="321" /></a></div>If all the work had to be done by advertising companies, then in every location you went you'd hear navigation tips like "Turn left at the Wendy's, then go straight to the 7-Eleven, where you bear right; you're at your destination when you can see a Home Depot on the opposite side of the street." All that overt advertising would probably drive attrition from the navigation tools, because a heavily sponsored environment is annoying and distracting over time. It reminds me of the skyscraper in Hong Kong that for years had the giant text "AD HERE" on its Kowloon-facing side. 
The incentives need to be subtle, like the "Pokémon Go Incense" and "Poké-lures" that Niantic provides for individuals to draw customers to the proximity of commercial businesses when they need foot traffic. It's more appropriate to leverage context without being so overt as to say, "To get your next Pokémon, walk into the store next to you and buy a soda, scanning the QR code on its side." You may in fact do those things of your own volition. But being instructed to do them in order to unlock a level of a game is annoying.<br /><p></p><p>While these incentive schemes work well for large companies with a broad user base, smaller developers and content creators will have a challenge publishing real-world content. For instance, if I had a large photogrammetry map of Chichén Itzá, how could I get that content to someone who could benefit from it? Generally, I'd need a content marketing strategy to find the person who needs it when they need it. I'd have to place marketing materials in the real world, at the locations where the need for that 3D map might arise, or run marketing campaigns in search engines or travel portals to let people discover it. So not only do I have to pay all the money necessary to accomplish the 3D map generation, I also have to ensure people can find it. That's a particularly baroque undertaking. There are indeed people who are working on the first part of the problem, <a href="https://zamaniproject.org/" target="_blank">making the 3D models</a> with lidar and photogrammetry stitching for the benefit of posterity. 
Yet people with those talents don't typically have the marketing savvy to address the discovery side of the user experience. That's where the free-to-index search architecture that popularized sites like Yahoo!, Bing and Google comes in. But the mechanisms that made those companies possible can't be directly extended to solve this search challenge. A new means of sifting data needs to be deployed for this case. <br /></p><div class="separator" style="clear: both; text-align: center;"><a href="https://www.phocuswire.com/Japanese-augmented-reality-destination-app-Sekai-hits-global-stores" rel="nofollow" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;" target="_blank"><img border="0" data-original-height="487" data-original-width="727" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjVCkeJiFcpwgFLbbuN54vX8IN9s3kZg2eLn2aytSfC9pWjDY92d50QzQVwfB4u1Jr0oB00nf6VykV_LK1musQS2mUUJPHoLiZcasUigMV6GJ4ult46EyY5TV4oB_NUff-nuI6n_k1bVFc/s320/sekai1.jpg" width="320" /></a></div>Fortunately the market is evolving significantly to the point that the fundamental tools of AR app development are <a href="https://www.wikitude.com/" target="_blank">available cheaply</a> to any developer. But the free-to-index public repository for location-based content is not yet solved. A few candidate approaches have been proposed in different countries. In Japan, <a href="https://www.japantimes.co.jp/life/2009/10/14/digital/sekai-cameras-new-reality/" target="_blank">Tonchidot</a> suggested the approach of a generally discoverable "air tag" with coordinates and content types that could be visited by a user with idle time using an app capable of posting and rendering the content. <p>In the US, QR-codes are typically used to engage location or topic-specific content queries from a physical sticker or billboard. 
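A discoverable "air tag" of the kind Tonchidot proposed is, at minimum, a named coordinate plus some content — exactly what the portable KML format mentioned earlier can express. Here is a sketch in Python; the placemark values are hypothetical examples, not a real registry entry.

```python
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"

def make_air_tag(name, lon, lat, description=""):
    """Serialize a minimal KML Placemark for a location-anchored asset."""
    ET.register_namespace("", KML_NS)
    kml = ET.Element(f"{{{KML_NS}}}kml")
    mark = ET.SubElement(kml, f"{{{KML_NS}}}Placemark")
    ET.SubElement(mark, f"{{{KML_NS}}}name").text = name
    ET.SubElement(mark, f"{{{KML_NS}}}description").text = description
    point = ET.SubElement(mark, f"{{{KML_NS}}}Point")
    # KML orders coordinates as longitude,latitude[,altitude]
    ET.SubElement(point, f"{{{KML_NS}}}coordinates").text = f"{lon},{lat}"
    return ET.tostring(kml, encoding="unicode")

tag = make_air_tag("El Castillo", -88.5678, 20.6843,
                   "Entry point for a photogrammetry model")
```

Any client that understands KML could render or query such tags by proximity; the unsolved part is the shared, spam-resistant index that the surrounding discussion calls for.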
The team at <a href="https://www.verses.io/" target="_blank">Verses Labs</a> proposes a global domain registry approach for location mapping that is similar to the DNS lookup table approach of the Internet domain registry legacy model. (The administrative body for DNS registrations is called <a href="https://www.icann.org/" target="_blank">ICANN</a>.) Recently, an Estonian company, <a href="https://www.ovr.ai/" target="_blank">Over Holding</a>, has proposed a concept whereby developers share a blockchain record of location assets with a tenancy privilege for developers who publish to or buy the location "hexagon" where the AR content will be placed. This isn't meant as NFT land grab hype. They envision a model whereby virtual estate needs to be able to fluctuate up and down in price in a manner similar to the open competitive market that defines the price of the physical real estate it is emulating. What <a href="https://en.wikipedia.org/wiki/Clear_Channel_Outdoor" rel="nofollow" target="_blank">Clear Channel</a> (US) and <a href="https://en.wikipedia.org/wiki/Str%C3%B6er" rel="nofollow" target="_blank">Ströer</a> (EU) are to the physical advertising world in billboard marketing, they'll mimic with a digital equivalent of "rights to display" in the shared space that developers leverage on their digital monetization architecture. </p><p>While the ICANN domain registry approach yielded many free market search engines in the past, this could be an exceedingly complex centralization effort to run for the entire globe. Whichever approach for location-based posting/indexing emerges, it will need to develop defenses just like the web-index techniques for spam prevention, content preferences, filtering and ephemerality/freshness if it is to become valuable and beneficial to us on a broad scale.<br /></p><p>It will be interesting over the next few years to see how the discoverability and publishing rights for location-based content advance. 
It's too early to tell whether we'll stay in-app for the next decade or go toward one of these more decentralized models for content sharing. A few more companies need to jump into the pool before a good standardization effort for cross-platform content visibility and share-ability emerges. While it may seem a <a href="https://en.wikipedia.org/wiki/Sisyphus" target="_blank">Sisyphean effort</a> to chronicle and map the world when we don't know yet what the eventual shared standard is going to be, I think it's a valuable expense of resources for the web of tomorrow, which should ideally be specifically relevant to us based on where we are, not just who we are. <b><br /></b></p><br /></div>ncubeeighthttp://www.blogger.com/profile/07874174268038500927noreply@blogger.com0tag:blogger.com,1999:blog-4288746931605915378.post-66639951226967120012021-08-14T10:29:00.000-07:002021-08-14T10:29:27.181-07:00Bridging the uncanny chasm<p>I remember my first experience visiting the <a href="https://omsi.edu/" target="_blank">Oregon Museum of Science and Industry</a> in Portland when I was a teenager. I had always been fascinated with sciences. So a playground of interactive exhibits with hands-on experiments was just my kind of thing. I particularly recall my first meeting a conversational chat-bot that they had installed which could output considerate responses to questions the user typed in. Dr. Know, as it was affectionately named, was an instantiation of the <a href="https://en.wikipedia.org/wiki/ELIZA" target="_blank">Eliza</a> program from MIT Artificial Intelligence Laboratory. It was an example of a Turing test, an idea put forward by <a href="https://en.wikipedia.org/wiki/Alan_Turing" target="_blank">Alan Turing</a>, an early pioneer in computer programming. He theorized that computers would eventually be indistinguishable from humans once their logic structures matched those we form through socialization processes. 
The <a href="https://en.wikipedia.org/wiki/Turing_test" target="_blank">Turing test</a> is a process of inquiry between a human and a robot, used to figure out whether the robot is sentient. (Ironically, on the internet, servers spend a considerable
amount of time giving internet users Turing tests before they allow us
to view websites. These "Completely automated public Turing tests to tell
computers and humans apart," CAPTCHAs for short, are meant to lessen the
amount of time that web servers spend dialoguing amongst themselves
without benefiting humanity in some way.)</p><div><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjSkEIy-4SPsrpd2Xnw4napDr4pWuO4o-E0Zm1kt_VIt3aZeyYu_HXgU1mvrthuYhPj5Oc-H-8d0GUWe7VyICKj_RFTwXv0YLj4OTQuOCwylKGE9fjUBfOciKN_w4oWbCpgL1Idr04WGCY/s495/ELIZA_conversation.png" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="321" data-original-width="495" height="208" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjSkEIy-4SPsrpd2Xnw4napDr4pWuO4o-E0Zm1kt_VIt3aZeyYu_HXgU1mvrthuYhPj5Oc-H-8d0GUWe7VyICKj_RFTwXv0YLj4OTQuOCwylKGE9fjUBfOciKN_w4oWbCpgL1Idr04WGCY/s320/ELIZA_conversation.png" width="320" /></a>These days, a majority of American households have at least one device that is capable of representing an interface to a conversational virtual assistant. Computer vendors have embedded speech-to-text inputs in hardware they vend to encourage us to speak questions to their respective cloud service virtual assistants. (Cortana, Siri, Alexa, Alice, Google Assistant) Their goal is to eventually obviate the need for keyboard entry for the next generation brain-computer interfaces. Staring at phone screens can be fun. But it can distract humans from leading normal lives rich with interpersonal social interaction, with humans. A lot of our time each day we spend talking through our fingers using secondary representations of language. If enough people offer up their voice patterns, computers can learn all accents to thereafter bypass the alphabetic language we typically type at them. 
Once they speak the same language as us, without the distance of the plastic/silicon intermediary of the computer screen, our near and distant relationships will return to a more normal means of communicating, and we’ll spend far less of our lives communicating through finger gestures.</div><div><br /></div><div>Each of us has had our own experiences conducting Turing tests with machines in the home or on phone lines, trying to navigate their logic structures and capabilities. I’ve seen lots of failed attempts to bridge what Masahiro Mori called “<a href="https://en.wikipedia.org/wiki/Uncanny_valley" target="_blank">The Uncanny Valley</a>” of foreignness that divides humans and machines, preventing them from forming comfortable trust-based relationships. I’m fascinated to watch the emotions people use to express themselves when they know their counterpart is not human. I’m impressed with the tricks the tech companies use to embed implied emotion through the tone of voice their virtual assistants speak at us. Ironically, it’s usually the humans that sound like robots in these interactions, with flat matter-of-fact insistence rather than the sing-song tone of voice typically used among fellow humans we seek to convince, sway or implore. </div><div><br /></div><div>Some people bristle at the idea that we would refer to a machine-learning algorithm plus a database as an artificial intelligence. It is as if the term intelligence needs to be an honorary title conferred only on those of us that have and express emotion, beyond vocal inflections. When I was testing the Turing computer in OMSI (or it was testing me) I was enthralled to see that the machine could grapple with the rules engine we use to socialize. The size of its knowledge database didn’t matter to me so much as its deft capability to generate an acceptable output response to well-framed questions. 
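Conversational flows of the Eliza sort boil down to an ordered list of pattern-and-template rules with a fallback reply. A minimal sketch in Python — these rules are illustrative stand-ins, not Weizenbaum's original DOCTOR script:

```python
import re

# Ordered (pattern, response template) pairs; the first match wins.
RULES = [
    (r"\bI need (.+)", "Why do you need {0}?"),
    (r"\bI am (.+)", "How long have you been {0}?"),
    (r"\bmy (\w+)", "Tell me more about your {0}."),
]
FALLBACK = "Please go on."  # the dead-end reply when no rule matches

def respond(text):
    for pattern, template in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return FALLBACK
```

Once you have seen a few exchanges, you can predict the reply to any input — which is exactly the point at which the illusion collapses.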
It took me about 30 minutes before I had mapped out the logic frameworks the programmers had used in the conversational flows. I could eventually predict how it was going to answer any question I posed. After that happened, I felt satisfied that I understood this mechanical friend well enough, and I could go on with my day.</div><div><br /></div><div>As children, we interface with the world intensely to find the external connections that give us pleasurable or negative reactions. I walked away from Dr. Know feeling like it had dead-ended too many of my questions. I'd gotten to the end of the dialog maze. It's a similar experience to what a lot of us have with smart speakers these days. They sometimes can’t progress in a dialog unless we buy something or augment them with some third party “skill” not readily on hand. As we come to depend more on virtual companions we don't want to hear, "<a href="https://vimeo.com/168231092" target="_blank">I'm sorry Dave, I'm afraid I can't do that</a>" when we're facing time-critical challenges. Human patience has a much shorter fuse than bot patience.<br /></div><div> </div><div>I recently joined a company, <a href="http://akin.com/" target="_blank">Akin</a>, that is developing AI for use in assisting people with exactly those time-critical decisions and actions. My team, who formerly worked on the Watson platform at IBM, are extending the use of AI technology to help engineers working in complex assembly contexts and families coordinating inter-dependencies across the family unit. Watson is the conversational AI that was designed, like Eliza, to banter back-and-forth in dialog with random questions. It was famous for <a href="https://en.wikipedia.org/wiki/Watson_(computer)" target="_blank">beating Jeopardy champion</a> Ken Jennings at his own game. 
Knowledge games are an area where AI can increasingly outmaneuver humans over time as <a href="https://en.wikipedia.org/wiki/Moore%27s_law" target="_blank">Moore’s Law</a> of increasing computer power favors machines. Humans can’t scale at the same rate unless they augment their capacity with external data sources, other people, or data from the internet.</div><div><br /></div><div>AI platforms can be especially good at repetitive tasks (set a timer; turn on the lights; turn up the volume) and state monitoring (weather forecasts; there's somebody at the door; your package will arrive tomorrow). AI assistants can enhance our effectiveness by helping us stay on task and focused, avoiding distraction. Beyond just assisting, we are finding more creative applications of AI, where it is able to significantly advance fields of inference such as pattern recognition, protein folding and vaccine creation. The more we can delegate tasks to AI, the more we free the human mind from burdensome task management so that our minds can exercise their own strengths.</div><div><br /></div><div>I look forward to the point that we can engage and collaborate more with our in-home AIs. Relationships thrive when the output is more than just the sum of inputs. We want to have robotic assistants that do far better than just what they’re told, or respond factually to what they’re asked. Just as friends propel us to maximize our own personal potential, AI assistants should be able to amplify our efforts toward a goal. To do this they have to get beyond our human tendency to recoil from things that are uncannily familiar. We need to find ways to let them draw closer, allow us to invest more of ourselves and rely more on them. In order to build trust with Akin’s AI assistants, the founders have incorporated as a public benefit corporation and are working with universities and health researchers to measure the quality-of-life benefit that results from using Akin AI in the home. 
</div><div><br /></div><div>We look forward to sharing more of our advances over the coming years.</div><div><br /></div><div><br /></div><div><br /></div><div><br /></div>ncubeeighthttp://www.blogger.com/profile/07874174268038500927noreply@blogger.com0tag:blogger.com,1999:blog-4288746931605915378.post-35807413177105332032021-03-03T08:00:00.056-08:002021-03-22T12:50:33.535-07:00Defending against quantum-computer hacking using biometrics<p>In 1978, BBC radio ran a satirical exposé about a group of hyper-intelligent pan-dimensional beings who were trying to get inside the brain of a human named Arthur Dent, who was a fugitive from the planet Earth. The radio show was so popular that it went on to become a book, a TV series, a movie and recently has been re-released on CD for future generations in its original audio format. </p><p></p><p>The story, called The Hitchhiker's Guide to the Galaxy, was told from the perspective of Arthur, who had no idea the importance of his former home, Earth, which had been destroyed shortly after he escaped. Neither did Arthur have an understanding of the tremendous significance of his own brain. The pan-dimensional beings, which appeared to him as mice, were actually the administrators of a massive planet-sized computer. They regarded Arthur as just a circuit in the computer that they managed. The mice had been conducting experiments on humans from inside Skinner boxes in the laboratories of human psychologists for many years. (Meanwhile, the psychologists believed it was they who were experimenting on the mice!) 
</p><p></p><p></p><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh_Rpa2qnuz2xklffI7lysqORNTlcHhNvGZALINO-BCAq1gwLnFr-lAlUwL_7Z05oRbkbehcIjby8HwiJuGspE3mzt6058i2HcOu-iPKPkSlEhNFK9ACB0lNb9eTZS2K9Fp5d2YpR9FNM0/s1188/Earth.png" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="1174" data-original-width="1188" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh_Rpa2qnuz2xklffI7lysqORNTlcHhNvGZALINO-BCAq1gwLnFr-lAlUwL_7Z05oRbkbehcIjby8HwiJuGspE3mzt6058i2HcOu-iPKPkSlEhNFK9ACB0lNb9eTZS2K9Fp5d2YpR9FNM0/s320/Earth.png" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><span style="font-size: x-small;">Deep Thought, on the future computer, Earth</span></td></tr></tbody></table>The Earth had been planned by another more primitive computer, called Deep Thought, which argued that life forms would be the circuits of this great future computer that could come up with the great question of "life, the universe and everything." Humans would therefore wander its surface, staring at Brownian motion in their tea cups, pondering the existential question for the benefit of all sentient beings in the galaxy, who eagerly awaited the outcome. Deep Thought had already revealed that the ultimate answer was 42. But determining the ultimate question would require millennia of pondering. The story alleges that finally a woman in a cafe in London had come up with the ultimate question while staring at her tea. This made the unfortunate destruction of the Earth very frustrating to the mice who'd been working on the surface of the planet for millennia prior. 
The mystery of the ultimate question was to remain hidden until the Earth could be replaced.<br /><p></p><p></p><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgPAsYntGXQ1Nn2P1FmTNgpvjzIKVBLqAJg7YPzbibIQoVIwjHtmr-sL2tdSE9j0S-qJFcC5ggr0W5fiZcT9WO37KzwAoBEMMk1lg2qC7GnGDKBdhR7cVjowXkDQuMfTg12wXKCxhdhyac/s1560/Mice%2526Arthur.png" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="1260" data-original-width="1560" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgPAsYntGXQ1Nn2P1FmTNgpvjzIKVBLqAJg7YPzbibIQoVIwjHtmr-sL2tdSE9j0S-qJFcC5ggr0W5fiZcT9WO37KzwAoBEMMk1lg2qC7GnGDKBdhR7cVjowXkDQuMfTg12wXKCxhdhyac/s320/Mice%2526Arthur.png" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><span style="font-size: x-small;">Martin Freeman as Arthur, facing interrogation by mice</span><br /></td></tr></tbody></table>However, the mice had reason to believe Arthur might have remnants of the great question inside his mind, as he was the last vestige of the original Earth program. From their perspective this secret question was much more valuable to the galaxy than his brain was of value to Arthur. Yet Arthur was reluctant to yield his brain. He was ultimately able to avoid being diced up by the mice. But he did end up marooned on the surface of Earth Mark II, a rebuilt computer based on the original Earth blueprints after the mice decided to start again from scratch. (Find out what happened to Arthur on Earth Mark II, starting at Episode 6 of the BBC radio show to hear of his adventures thereafter.)<br /><p></p><p>Rising up from the perspective of the radio series, we may assume that we humans are actually on the second coming of Earth. 
In our own narrative, humans have just built their own hyper-intelligent computers, which we call quantum computers. These crafty computers have circuits that can think three thoughts instead of just two as with their predecessors, binary computers. A great deal can be achieved by allowing a circuit to go from thinking yes/no (0 or 1 in a logic gate) to thinking yes/no/maybe! Just two years ago in Nature magazine we read pronouncements that a group of scientists had used a quantum array of circuits to demonstrate “<a href="https://www.nature.com/articles/s41586-019-1666-5" target="_blank">Quantum Supremacy</a>” for their particular computer in terms of sheer speed of calculation. While this is great news for anyone <i>with</i> a quantum computer, it was suddenly bad news for everyone else's non-quantum computers, as it implied that the rest of us would have to go back to the drawing board to try to figure out how to secure our binary-logic computers and computer networks that were suddenly deemed less supreme. <br /></p><p>There may be no hyper-intelligent pan-dimensional beings trying to hack our
skulls. But there are a bunch of ordinary folk who plan to use these computers, like those pesky mice, to peer into our networks and steal our secret questions, as they've been doing with
binary computers and phishing exploits for decades. Our legacy means of encrypting networks have
been based on mathematically hard problems. Adding randomness to the data, referred to as "introducing entropy" or adding "cryptographic salts," makes recovering such data without access to the keys too complex for a binary computer. As we saw with the <a href="https://en.wikipedia.org/wiki/Sycamore_processor" target="_blank">Sycamore</a> quantum array, a process that could take 10,000 years on a binary super-computer takes a mere 200 seconds on a quantum array. (This was followed by the <a href="https://www.sciencenews.org/article/new-light-based-quantum-computer-jiuzhang-supremacy" target="_blank">Jiuzhang</a> computer, which claimed to be even faster.) Theoretically such a fast computation process could be used to apply <a href="https://www.quantiki.org/wiki/shors-factoring-algorithm" target="_blank">Shor's algorithm</a> to factor RSA-level
crypto-keys while the keys were still in use. This implies that we need to introduce greater
algorithmic complexity to eliminate this vulnerability should such computers be used for decryption in the future. <br /></p><p><a href="https://www.post-quantum.com/" target="_blank">PQ Solutions</a>, has been working on this challenge of protecting legacy networks and software from threats emerging in the post-quantum era with a means that is both backward compatible with RSA networks, yet future-proof against decryption attacks regardless of computer speed. While we're currently standardizing this cryptographic approach with the <a href="https://csrc.nist.gov/projects/post-quantum-cryptography" target="_blank">US Department of Commerce NIST</a> working group, we are also introducing products in the market today to allow other companies to have cryptographic-agility to layer in this new standard once the NIST process is complete. (Final Post-quantum cryptographic standards will be announced next year and required for all government service providers thereafter.)<br /></p><p>Our identity validation platform <a href="https://www.nomidio.com/" target="_blank">Nomidio</a>, allows companies to ensure that they only authenticate users for access to their secure/private networks after they've been biometrically proven to be who they say they are. You may wonder why we think biometrics are the key to quantum-safe encryption. We will be presenting on this topic in the upcoming conference <a href="https://www.quantumbusinesseurope.com/demo-sessions-march-16th-2021/#!" target="_blank">Quantum Business Europe</a>. Please join us if you're available to hear about our products and our philosophy of networks protected against such threats. But for those who can't join the conference demos, I'll elaborate on our approach.<br /></p><p>Computers will be better over time at factoring large numbers which we use in defense against binary computers for RSA encryption. So we need to change the game with something that computers can't factor or decrypt. 
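Shor's algorithm gets its power from performing the order-finding step in quantum polynomial time; the surrounding reduction from factoring to order-finding is classical. Here is a sketch of that reduction in Python, with the quantum step replaced by brute force, so it only works for toy numbers:

```python
from math import gcd

def find_order(a, n):
    """Smallest r > 0 with a**r % n == 1 -- the step a quantum
    computer performs exponentially faster than this brute force."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n, a=2):
    """Classical skeleton of Shor's reduction: order of a mod n -> factors."""
    g = gcd(a, n)
    if g > 1:
        return g, n // g          # lucky: a already shares a factor with n
    r = find_order(a, n)
    if r % 2:
        return None               # odd order: retry with another a
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None               # trivial square root: retry with another a
    return gcd(y - 1, n), gcd(y + 1, n)
```

For RSA-sized moduli the `find_order` loop is hopeless on binary hardware, which is exactly why the quantum speed-up of that one step threatens RSA keys.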
We can borrow a concept from Deep Thought: humans are the answer to the challenge. Computer-stored passwords are a vulnerability we all know, because they are static in time and can be stolen and replayed even on networks otherwise protected by <a href="https://en.wikipedia.org/wiki/Public-key_cryptography" target="_blank">public-key cryptography</a>. We are among a broad consensus of security companies that advocate transitioning to passwordless network protection. Just as with the increasing incidence of car theft by <a href="https://driving.ca/tesla/auto-news/news/watch-two-thieves-effortlessly-steal-a-tesla-using-a-homemade-antenna" target="_blank">capturing radio signals</a> from key fobs, we now have to ensure our keys are not left in a place where their signals can be captured. </p><p>PQ Solutions' approach to securing network end-points is to introduce live performance of biometric proofs into the encryption process. Quantum computers can be used to simulate incredibly complex mathematical equations and physical systems. But a quantum computer wouldn't be able to simulate a human. By sampling <span>behavioral elements of a live authentication flow, we can ensure machine-based intrusions are not able to access a network or breach static encrypted files signed with the biometric hash. Unlike car keys and their RFID signals, your identity can't be stolen from you. <br /></span></p><p><span>The benefit of using an </span>"Identity as a Service" (IDaaS) platform is that companies don't themselves have to hold any biometric data on their servers. Remember the European privacy regulation <a href="https://gdpr.eu/data-privacy/" target="_blank">GDPR</a>, which tightly regulates data collection and protection? That's why a company's chief technical officer does not want to build an in-house biometric database of their users. 
Nomidio IDaaS provides a zero-knowledge, cloud-based solution for identity validation so CTOs can delegate access for proven individuals internally, while outsourcing identity proofing in their access management technology stack. Our goal with Nomidio is to give companies vault-like biometric authenticity checks without causing a large data footprint for our relying parties and partners. </p><p>A secure network is like the hull of a submarine. Deep underwater, the hull of a submarine is hardened against leaks. If you wanted to put a window in a submarine, you'd have to ensure that it was as pressure-tight as the hull of the submarine itself. Nomidio does just that. As a user is granted access into a network, they have to provide a multi-factor proof that they are who they say they are. Their biometrics are then woven into the encrypted session access key used to grant visibility, certifying that their access token cannot be duplicated, captured or recreated by any person who is not them. This goes beyond just proving they have the phone or the RSA key-fob of a formerly-approved employee, as is the case with 2-factor apps or SMS-based systems. With Nomidio, a user must match the live facial likeness of the authenticating user, along with an authenticity check of their biometric voiceprint, as they log in. These separate factors in the multi-factor authentication are validated independently, and neither can pass based on past recordings or images of the same person.<br /></p><p>If you're interested to learn more, attend our <a href="https://www.quantumbusinesseurope.com/demo-sessions-march-16th-2021/#!" target="_blank">free-pass demonstrations</a> at Quantum Business Europe, or visit <a href="http://nomidio.com">nomidio.com</a> to learn how to integrate using the open protocols OpenID or SAML. 
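As a conceptual illustration of weaving a live proof into a session key (this is a sketch of the general idea only, not Nomidio's actual protocol, and every name in it is hypothetical): binding a hash of the freshly captured biometric sample to a one-time server nonce means a recording replayed from an earlier session derives a different, useless key.

```python
# Conceptual sketch only (NOT Nomidio's actual protocol; all names here are
# hypothetical): binding a hash of a live biometric sample to a fresh per-login
# server nonce, so a captured recording can't be replayed in a later session.
import hashlib, hmac, secrets

def derive_session_key(biometric_hash: bytes, server_nonce: bytes) -> bytes:
    """Mix the live sample's hash with a one-time nonce into a session key."""
    return hmac.new(server_nonce, biometric_hash, hashlib.sha256).digest()

# The server issues a fresh random nonce for every authentication attempt.
nonce_login_1 = secrets.token_bytes(32)
nonce_login_2 = secrets.token_bytes(32)

# Stand-in for a hash of the live face/voice capture (illustrative only).
live_sample_hash = hashlib.sha256(b"voiceprint + facial features").digest()

key_1 = derive_session_key(live_sample_hash, nonce_login_1)
key_2 = derive_session_key(live_sample_hash, nonce_login_2)
assert key_1 != key_2  # same biometric, different sessions -> different keys
```

The design point is that the static secret (the biometric template) never has to leave the identity provider; only nonce-bound derivations do.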
We provide a one-month free demo account through the Amazon Web Services and Azure marketplaces.<br /></p><p>Please enjoy our videos from the Post Quantum Europe conference. (n.b. PQ Solutions is a platinum sponsor of this conference.)<br /></p><ul style="text-align: left;"><li>Christopher's speech on leveraging biometrics in network defense: <a href="https://www.youtube.com/watch?v=QUcTUtNp-F4" target="_blank">Link</a> to YouTube</li><li>Andersen Cheng's summary of the timing of the quantum hacking threat: <a href="https://www.youtube.com/watch?v=0k03-uAZVlA" target="_blank">Link</a> to YouTube</li><li>CJ Tjhai's presentation on hybrid encryption with post-quantum cryptography: <a href="https://www.youtube.com/watch?v=TcfauQrODjk" target="_blank">Link</a> to YouTube <br /></li></ul><p><br />For more information on Hitchhiker's Guide to the Galaxy visit:<br />2005 Cinema version: <a href="https://www.imdb.com/title/tt0371724/">https://www.imdb.com/title/tt0371724/</a> BBC TV version: <a href="https://www.justwatch.com/us/tv-show/the-hitchhikers-guide-to-the-galaxy">https://www.justwatch.com/us/tv-show/the-hitchhikers-guide-to-the-galaxy</a> BBC HHGTTG Legacy Link: <a href="https://www.bbc.co.uk/programmes/b03v379k/episodes/guide">https://www.bbc.co.uk/programmes/b03v379k/episodes/guide</a> Recently re-published CD collections: <a href="https://www.amazon.com/Hitchhikers-Guide-Galaxy-Primary-Phase/dp/1787533204">https://www.amazon.com/Hitchhikers-Guide-Galaxy-Primary-Phase/dp/1787533204</a></p>ncubeeighthttp://www.blogger.com/profile/07874174268038500927noreply@blogger.com0tag:blogger.com,1999:blog-4288746931605915378.post-33163833357774545202021-02-09T18:57:00.061-08:002021-02-24T15:08:07.487-08:00From cosmology to quantum computers<p>Over the past few decades, the concepts of quantum mechanics have been sprinkled into our common lingo and news headlines in a 
gradual crescendo. Concepts that were mere theories at the beginning of the last century are now demonstrable facts. As uneasily as we sit with their implications, we carry their applied technologies daily in our pockets. These concepts have been fascinating for me to study. Though they may seem a bit esoteric to most people, they are becoming ever more personally relevant to our common experiences. </p><p>Some concepts that are astounding, but that are increasingly ordinary in our daily lives:</p><ul style="text-align: left;"><li>We control the quantum jumps of electrons to emit photons of the exact spectrum we want to light our living rooms with. Vice versa, we use photons of specific spectra to control where we want electrons to go and how we want them to behave. (Photoelectric effects, common in our technology as the light-emitting diode; related electronics detect the perturbation a finger's presence causes on a cellphone screen through capacitive sensing, which sends signals to our computer chips.) <br /></li><li>We've mastered material science to force electrons into super-chilled
wave-crystal states that allow us to research un-Earthly states of
matter that exist in few places in our universe at its current temperature. (Bose-Einstein condensates, which have bearing on superconductors and may at some point also be applied to our technology decades from now.) <br /></li><li>We have been able to entangle wave-states of paired quantum particles, then beam them over distances in a vacuum, to then <i>read</i> the paired state information at a significant distance from the split. (Quantum teleportation, which may someday lead to entangled-particle communication many decades from now.)<br /></li><li>We've built computers that can write to and read from single quantum particles in a super-imposed wave state of two spins. (Quantum computing, just entering into application recently and soon to be used at a greater scale.) <br /></li></ul><p>It's the last item I'd like to address in this blog post. Just last year, a quantum computer proved "<a href="https://www.nature.com/articles/d41586-019-03213-z" target="_blank">supremacy</a>" over classical computer circuits in calculation speed. While our ability to harness and control quantum particles is
fantastic news for the advancement of our technology, it's also a bit of bad news for our soon-to-be-legacy computer encryption approaches. Specifically, it has implications for the continuity of the software and internet industries as we've come to depend on them. So it's worthy of attention. Over the next two years you may see or hear a lot more about it.<br /></p><p>It takes a bit of time to explain why one would need to be
"quantum-safe" in the context of these advances. The US government standards body, <a href="https://csrc.nist.gov/projects/post-quantum-cryptography" target="_blank">NIST</a>,
is currently in an open call for proposals to determine the new encryption standards for this "post-quantum" era, the way we speak
of post-modern art. The goal of our government and contributing engineering teams is to protect our
future software industry by preparing for the proliferation of quantum computers and the greater risk of decryption of the secure data we use day to day. If you had awoken thinking we were just starting the quantum era now, you may wonder about the nature of the era we're moving beyond, and what these computers that use quantum effects in their operation actually are. As is usual in my blog, I'd like to start
the story with a digression about why I'm fascinated with this topic. If you haven't followed the emergence of this field, it may be interesting.<br /></p><p>I remember as a teenager listening to a satirical BBC radio show called <a href="https://en.wikipedia.org/wiki/The_Hitchhiker%27s_Guide_to_the_Galaxy" target="_blank">The Hitchhiker’s Guide to the Galaxy</a>. In this show, Douglas Adams postulated a spaceship that could leap through spacetime to different locations simply by calculating the precise "improbability" of the spaceship being at any specific location. This concept was inspired by the idea of the “quantum wave function” of quantum field theory. The wave function is a probabilistic formula for modeling wave-particle trajectories, applied to the interpretation of the debris coming out of atom smashers. (See <a href="https://en.wikipedia.org/wiki/Feynman_diagram" target="_blank">Feynman Diagrams</a> for more on this.) We know that we can’t pin down the precise location of a particle, due to the <a href="https://en.wikipedia.org/wiki/Uncertainty_principle" target="_blank">Heisenberg uncertainty principle</a>, so the wave function is an approximating model that uses statistics to describe the locations where a particle is most likely to be found. In his narrative, Douglas Adams reversed the concept of the quantum wave function such that a probabilistic calculation <i>caused</i> the effect of making the spaceship appear in random places in the universe. (A description of his hypothetical “improbability drive” is <a href="https://hitchhikers.fandom.com/wiki/Infinite_Improbability_Drive" target="_blank">here</a>.) The idea that particles could hop through spacetime outside of a linear trajectory was my first introduction to what people called the “weirdness” of quantum mechanics. 
These hops are referred to as <a href="https://en.wikipedia.org/wiki/Quantum_jump" target="_blank">quantum jumps</a>, or in some specific cases <a href="https://en.wikipedia.org/wiki/Quantum_tunnelling" target="_blank">quantum tunneling</a>. </p><p>(Side note: I tend to say “particle-waves” because the term particle is loaded as a concept implying that matter is tangible. Our current approach to describing sub-atomic components is to acknowledge that they behave as <i>both</i> particles and waves depending on how they are measured. We could say, “waves that were formerly known as particles” because
we've learned from the last 100 years that our world is non-tangible in the particulate sense. What we
generally see behave as object-like particles around us are actually energetic waves
that appear as point particles when disturbed or obliterated. That’s the most peculiar thing
about quantum mechanics that has fascinated me. What we think of as
tangible is actually an emergent property of various
energetic fields that react strongly against each other in spacetime. As atoms are mostly empty beyond these energetic wave interactions, it is an odd conception to think that everything
around us that we interact with in tangible ways is mostly empty vacuum in a field
of tightly-bound highly-energetic ripples of force. Another term that took getting used to is the idea of fused time+space that we just call spacetime now, because Einstein's theories of relativity demonstrated the inter-connectedness of the dimensions of space with time.)<br /></p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiQ1O-cE_SOZwyA0I9z9Ob4njtQw4hsHH6xXnFlK2wNiITPtmKN_djp6aFpQivYnarvW9JMyo3QrcwgRBGrbyP5i072X6SznnI-YnCoWT88_qoe8U_iQeD4kDgl7SSuRxVAgqaqZOJIi14/s2048/Electron_orbitals.png" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="1328" data-original-width="2048" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiQ1O-cE_SOZwyA0I9z9Ob4njtQw4hsHH6xXnFlK2wNiITPtmKN_djp6aFpQivYnarvW9JMyo3QrcwgRBGrbyP5i072X6SznnI-YnCoWT88_qoe8U_iQeD4kDgl7SSuRxVAgqaqZOJIi14/s320/Electron_orbitals.png" width="320" /></a></div>When I was studying physics in high school, I remember the day that my professor was discussing the <a href="https://en.wikipedia.org/wiki/Bohr_model" target="_blank">Bohr model</a> of atomic structure. After lab was done, I went to ask Professor Rolfe what the other diagrams were on the back pages of the charts as I paged through them. They were peculiar-looking globular shapes that depicted the electron configurations of various elements in 3D. These maps showed the probabilistic location of electrons on the outer shells of atoms, where their negative charges bulged away from the positively-charged nucleus. Bohr diagrams depict the atom as a two-dimensional disk, like the model of planets in our solar system, which makes it very easy to envision their chemical combinations with other atoms. Yet in experimental demonstrations we find that atoms are like bumpy balls of positive and negative charge with multiple poles. 
Professor Rolfe explained, "We refer to electron positions as 'orbitals', but they actually don't orbit the nucleus. Rather they buzz around in a field of space at specific areas on the outermost edges of the nucleus's positive charge field." "Wow! Can we study that next class?" I asked. "No. We have to focus on the curriculum, which is specific to chemical bonds. You'll get to quantum mechanics when you get to college," he explained. <p></p><p>One thing parents and teachers learn about kids is that the best way to challenge and inspire them is to tell them they can't do something! Professor Rolfe saw the twinkle in my eye and knew he was sending me off to the races. I talked with my father about it. He in turn suggested I delve into General Relativity first, and started me on the path toward my fascination with cosmology with a book called "Relativity Visualized." Einstein had been an early contributor to quantum mechanics. But to understand why Einstein was provoked into his study of particle-entanglement, it was important to understand the ideas behind his general and special theories of relativity. From there, I branched out to read more about quantum field theory and the new concepts of how relativity and quantum mechanics should eventually dovetail in cosmology as part of a unified theory of the four fundamental forces of nature: the nuclear strong force (which binds atomic nuclei together), the nuclear weak force (which causes atomic radiation), the electromagnetic force (governing electron and photon behaviors) and gravity (which is currently best described by the geometry of spacetime and has evaded an elegant tie into quantum field theory).<br /></p><p>Relativity was particularly strange for me to grasp: the equivalence of mass and energy, the inter-connectedness of space and time, the limits of light speed and the warping of the spacetime continuum described by the presence of mass/energy. 
While it didn't sit well in my Newtonian-focused understanding of space and causality, I could grasp the ideas of relativity's predictions, which have been confirmed by every astronomical observation since. If you've watched Nova or any other science shows about relativity, you may have seen demonstrations of the spacetime continuum as a 2D surface that creates indentations where stars are present. That's indeed one great way to visualize gravitational distortions of spacetime. But when you consider the idea of <a href="https://en.wikipedia.org/wiki/Time_dilation" target="_blank">time dilation</a> for objects moving at near light speed, a different aspect of relativity from the mere presence of matter, you get a slightly more peculiar view of the implications of the nature of our spacetime. You get a sense of our universe being made of a kind of taffy-like consistency. </p><p>The large-scale spatial-temporal view of our cosmos is best grasped by thinking of the behaviors of gravity and light acting against the background tapestry of spacetime. Yet light waves, when studied on the microscopic scale, behave much like the particles that we accept that we're bodily made of. One way to bridge our thinking from the cosmic scale down to the scale of our substantive selves is to focus on the similarities between electrons and photons. These two wave-particles, which make up the dual components of the photoelectric effect, are tightly bound to each other and yet seem so vastly different in their natures and observable behaviors. I call them dual components because photons (light waves) are emitted/generated by the hop an electron makes when it leaps from a high-energy orbital to a lower resting state closer to the protons in the atomic nucleus. 
Conversely, if an atom is struck by a photon and its energy absorbed, the electron hops to a higher orbital.</p><p>Photons have the distinction that they can move faster than all other known quantum wave-particles such as electrons and quarks. But the head-scratching really starts once you begin to delve into those specific behaviors of non-photon quanta. Why should the photon be able to travel at the fastest allowable speed known in our universe, while the other quanta cannot? The photon is not a charged particle like the electron or quark. That's one clue. Why does not having charge imply that it gets to travel in the fast lane while other quantum waves can't? This puzzle continues to riddle theorists, who've been investigating explanations such as (currently theoretical) non-space dimensions that we can't perceive directly, which might bridge our thinking to a hidden mechanics of our cosmos. But I shouldn't get too far down the rabbit hole here. There are great publications to follow on quantum-gravity theories, string theory, black holes and the holographic principle that I encourage interested folk to look up. (<a href="https://www.amazon.com/Brian-Greene/e/B001H6INP0" target="_blank">Greene</a>, <a href="https://www.amazon.com/Stephen-Hawking/e/B000AP5X0M" target="_blank">Hawking</a>, <a href="https://www.amazon.com/Leonard-Susskind/e/B001IGHNBE" target="_blank">Susskind</a> and <a href="https://www.amazon.com/Caleb-Scharf/e/B001JP49Y8" target="_blank">Scharf</a> are good authors for the enthusiastic.) Though I find the cosmology topics fascinating, I'd like to put those concepts aside for the moment and drill down into the aspects of quantum field theory that specifically apply to our emerging technology applications brought about by harnessing these behaviors for practical purposes by tweaking these tiny waves in our machinery. 
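That electron-photon hand-off is quantified by the Planck relation E = hf: the energy the electron gives up fixes the frequency, and hence the color, of the emitted photon. A small sketch using textbook hydrogen values (the function name and rounded constants are my own, for illustration):

```python
# The electron's orbital "hop" obeys the Planck relation E = h*f: the energy
# it gives up fixes the frequency (color) of the emitted photon. Example:
# hydrogen's n=3 -> n=2 jump, the red Balmer-alpha line.

RYDBERG_EV = 13.6057  # hydrogen ground-state binding energy, in electron-volts
HC_EV_NM = 1239.84    # Planck's constant times the speed of light, in eV*nm

def photon_wavelength_nm(n_hi: int, n_lo: int) -> float:
    """Wavelength of the photon emitted when an electron drops n_hi -> n_lo."""
    energy_ev = RYDBERG_EV * (1 / n_lo**2 - 1 / n_hi**2)  # energy released
    return HC_EV_NM / energy_ev                           # lambda = h*c / E

print(photon_wavelength_nm(3, 2))  # roughly 656 nm: visible red light
print(photon_wavelength_nm(2, 1))  # roughly 122 nm: ultraviolet, invisible to us
```

The same relation run in reverse is absorption: a photon of exactly that wavelength can kick the electron back up to the higher orbital.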
So leaving the "spacey" aspects of this topic there, I'll progress now to the specifics of where we are applying these new capabilities to our machines in the imminent future.</p><p>
</p><p><b>Progress harnessing the weirdness of quantum wave/particles in computers</b></p><p></p><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgTBrlbgRqeB0K4DSXqGag7HXLUsc1L-yVEMis1qItqrT57PZ42ArVZYSy4YKA2noGlUjfjR75_34lpEaKA3o6ZD9ANZLcAZFdPHmyNF3IMTbig6p96o2kgb5U54WWibATC4q3u0UhhNUg/" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img alt="IQM Quantum Computer" data-original-height="600" data-original-width="400" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgTBrlbgRqeB0K4DSXqGag7HXLUsc1L-yVEMis1qItqrT57PZ42ArVZYSy4YKA2noGlUjfjR75_34lpEaKA3o6ZD9ANZLcAZFdPHmyNF3IMTbig6p96o2kgb5U54WWibATC4q3u0UhhNUg/w213-h320/400px-IQM_Quantum_Computer_Espoo_Finland.jpg" width="213" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">IQM Quantum Computer<br /></td></tr></tbody></table>Digital technology for the mass consumer market of the last century has been dependent on our exquisite mastery of photo-electric effects in conductive metals, vacuum tubes and silicon wafers. <a href="https://en.wikipedia.org/wiki/Moore%27s_law" target="_blank">Moore’s law</a> has predicted continual leaps in the processing power of binary computer chips over the past decades based on our ability to pack logic gateways densely together on a chip. But at this point the advancement we are making with quantum computers is based on an entirely different nature of the logic gateway itself. It’s not because of just the smallness of our computer circuits that will result in the next leap in computing power. Rather, it’s because we’re jumping beyond the concept of a binary circuit completely. Put most simply, instead of creating a logic gateway of 0 or 1, we can now create a triple logic gateway that consists of 0, 1 and 1⁄2. 
(Not literally 1⁄2, but a superposition representing 0 and 1 at once, which I'll get to later.) That’s what all the buzz is about. This may seem like a small incremental jump for a single circuit. But on a meta-layer, if every logic gateway of a computer were a three-state instead of a two-state circuit, the advancement of an array of circuits in terms of computing power would be absolutely tremendous. The quantum computer is just that, an array of quantum bits that can coordinate calculations incredibly rapidly. <p></p><p>In the United States, we had a moment of transition where the FCC and all TV manufacturers made a switch over to preferring <a href="https://www.fcc.gov/general/digital-television" target="_blank">digital over analog</a>
radio transmissions. All old TVs had to use a new digital converter to
receive the new spectrum. Nothing like that is going to happen in this
case. We’ll certainly always be using binary computers. They’re so
useful. And we’re adept at producing them with low resource cost now.
The new computer circuits we’re discussing don’t obviate the need for
earlier technology. They’re just useful for entirely different purposes.
The emergence of the calculator (Professor Rolfe used to call them
"calcu-sooners") didn’t mean the abacus and slide-rule were no longer
useful. Calculators are preferred because of their higher rate of
performing calculations. The same will be true of quantum computers as they become more common. Anyone with a really powerful calcu-sooner will beat
anyone who tries to conduct the same calculation with a contemporary
binary calculator. For instance, the alleged speed of the computer cited
in the quantum supremacy proof above is such that a calculation which
would take a traditional binary super-computer 10,000 years to complete
could be completed in 200 seconds with a quantum computer. <br /></p><p>"How are they creating these cool calcu-sooners?" you may ask. For this we have to go back into the somewhat spacey stuff again. This time turning from the telescope to the microscope, we adjust the focus of our lens to peer at the nature of space and energy within atoms. It may sound strange to say we need to look at space here. But remember Einstein's direct correlation between energy's presence in spacetime warping it's fabric. Energy can't exist in space without altering it. Theories emerging since relativity have explored and tested this concept of the bindings of wave-particle motion to the underlying spacetime. It is at this sub-atomic scale that we see the peculiar behaviors of matter that yield the opportunity of manipulating the quantum bits in a near-frozen state. </p><p>At the smallest scale, during the “physical” interactions we cause in atom smashers, we observe peculiar states of <a href="https://en.wikipedia.org/wiki/Fermion" target="_blank">fermions</a> and force-particles called <a href="https://en.wikipedia.org/wiki/Boson" target="_blank">bosons</a> that spring out of ordinary atoms. We can observe, at some energy densities, exotic wave-particles springing into existence and nearly immediately disappearing as if they emerged from a sub-strata underneath our visible cosmos that is filled with something else entirely, summoned only by the energy density of these small but intense particle collisions. These exotic states of matter can only have a fleeting existence at our
world's temperatures and densities. (Our universe had a much higher temperature and density 13.8 billion years ago, so these high-temperature interactions give us a way to glimpse the nature of wave-particles in spacetime from a different perspective, as they more commonly behaved long ago.) </p><p>Going in the opposite temperature
direction, when we chill particles in an area of vacuum as cold as is
possible for humans, we see other exotic behaviors such as quantum
particles existing in super-positions of overlapping opposite states of spin in a
single moment. We also observe particles harmonized in ice-like "condensate" states, where a bunch of disparate particles behaves as a uniform super-fluid. Why should matter be able to harmonize and exist in super-positions of two states at once? The cause of this phenomenon is the subject of fascinating exploration and debate; we don’t precisely know yet. But suffice it to say that we can now super-chill atoms to the point that they start behaving in this exotic way, and in a way that is useful in a machine. We can now write and read using their hovering in this super-imposed state between two absolute
values. As long as we keep them in this low temperature state we can harness their fluctuating state as a computer logic gateway. This is where the concept of spin-up/spin-down/both-at-once
comes from in the 1/2 value mentioned above. By leveraging this "shimmer" between the spin states, we get our third state for the logic gateway that we need to create the quantum computer.<br /></p><p>Were temperature to be
slightly higher, altering and measuring the spin of a quantum bit would not be
possible. The more bits we add into the array, the more chaotically
the machine behaves, and the less utility we can derive from the system
from the perspective of a useful computer. Achieving a
reasonably robust array of quantum bits is incredibly challenging due to
the temperatures needed to keep the array stable. The Sycamore quantum computer used in the above demonstration had only 54 qubits (53 of which were operational during the experiment). Lots of work goes into making these sensitive machines. So we can somewhat infer that they are not going to
proliferate particularly rapidly in the commercial world due to economic
constraints. Yet in a few years there will likely be thousands of them.</p><p>I'll leave it to the scholars and inventors of our next generation to talk about the wonderful things we'll be able to do with this advanced computing power. It is good news that humans are now on the brink of this great new capability. My near-term focus, like that of many of the software industry's security engineers in the coming years, will be to ensure our legacy computer industry will be able to <i>isolate</i> quantum computers from the rest of our legacy networks and software. The way we build secure networks today is by leveraging mathematically complex encryption of data that is transmitted over the internet. By sprinkling in RSA-grade encryption (our legacy standard), you can feel reasonably secure in using your home internet connection to log into your bank account with the assurance that no mid-stream interference will make your account vulnerable. So long as the key you use can't be factored by a computer during the time that your login credential is in use, you're safe. That's where the concern arises for the hypothesized concept of a decryption attempt leveraging a quantum computer. Factoring the numbers behind weaker keys already takes mere years on an ordinary computer. So if factoring capability were greatly enhanced, even the currently most-secure keys could become vulnerable. </p><p>Most of us don't need to be too concerned about this. Only the companies we rely on to secure our communications and web services do. That's why our industry is transitioning to post-quantum cryptography over the next few years. I hope by now this all makes sense and you can understand why the US Department of Commerce is doing this. Next time you read of post-quantum (insert-noun-here) in the news, you'll be generally up to speed on what they are talking about. It is simply trying to save our past and present from vulnerabilities that could emerge in the future. 
Therefore it's referred to as "future-proofing", so that we can go on using our legacy know-how and open web protocols safely in the future. </p><p>We have seen ordinary citizens regularly targeted by hackers no matter how obscure they may think they are. Yet no particular person is likely to be the target of quantum decryption attacks. If such attacks happen with a quantum computer, they will likely be focused on networks of information that are deemed valuable. The economics of cost implied by the complexity of quantum computers could lead us to conclude that in general they will be used only to solve very interesting opportunities and problems. But some assert that state secrets and financial institutions are so interesting that they will likely be the first vectors of exposure for the decryption experiments of state-sponsored actors who would gain from such access or information. Internet and software companies don't sit around thinking that their customers are too obscure to be interesting targets. So many of the companies we rely on daily will be implementing these new encryption standards when they are finalized. At some point you'll be asked to upgrade your software with a new set of tools and protocols that provide this new standard for security.</p><p><b>Entangled-particle cryptography</b> <br /></p><p>Post-quantum encryption that the NIST team at the US Department of Commerce is
researching doesn't involve quantum field theory specifically, as it is a defense <i>against</i> quantum computers. One of the most exciting concepts I'd enjoyed reading about is the quantum effects of <a href="https://en.wikipedia.org/wiki/Action_at_a_distance" target="_blank">nonlocality</a> and <a href="https://en.wikipedia.org/wiki/Quantum_entanglement" target="_blank">quantum entanglement</a>, which lead up to what we've been hearing about in the media as "<a href="https://en.wikipedia.org/wiki/Quantum_teleportation" target="_blank">quantum teleportation</a>." Brian Greene explained at length in his book, <a href="https://en.wikipedia.org/wiki/The_Elegant_Universe" target="_blank">The Elegant Universe</a>, how this idea of teleported information through the manipulation of particles, which were paired earlier in time, could create the potential for a <i>perfect</i> cryptography. <br /></p><p>The mechanism of using paired particles involves first creating an entangled wave state between two particles, or splitting a single quantum wave in two, which can then separate spatially over time to transmit information when those particles are later read in remote locations. The concept stems from a prediction of Einstein, Podolsky and Rosen, abbreviated as the <a href="https://en.wikipedia.org/wiki/EPR_paradox" target="_blank">EPR Paradox</a>. This process of causing interaction between two particles, inherently connected while spatially separated, was what Einstein termed "spooky action at a distance." We are now relatively proficient at using beam splitters or other light-control means to create the entangled wave states needed to do this kind of rendering, sending and capturing of particle signatures at short distances. But they can't be used in long-distance telephony at present because the wave state involved is too fragile. 
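To get a feel for the pair statistics Einstein found so troubling, here is a toy Monte-Carlo sketch. For a spin-singlet pair measured along detector angles a and b, quantum mechanics predicts the correlation E(a, b) = -cos(a - b); the code simply samples outcomes from that predicted joint distribution, so it illustrates the correlations, not any physical signaling mechanism.

```python
# Toy Monte-Carlo of the EPR correlations. For a spin-singlet pair measured
# along detector angles a and b, quantum mechanics predicts the correlation
# E(a, b) = -cos(a - b). This merely samples outcomes from that predicted
# joint distribution; it illustrates the statistics, not a signaling mechanism.
import math, random

def measure_singlet(a: float, b: float, rng: random.Random):
    """One (+1/-1, +1/-1) outcome pair for detector angles a, b in radians."""
    p_same = 0.5 * (1 - math.cos(a - b))  # probability the detectors agree
    s = rng.choice((+1, -1))              # first particle's outcome is 50/50
    t = s if rng.random() < p_same else -s
    return s, t

rng = random.Random(7)
trials = 20000

# Aligned detectors: perfect anti-correlation, every single trial.
assert all(s == -t for s, t in (measure_singlet(0.0, 0.0, rng) for _ in range(trials)))

# Detectors 60 degrees apart: the correlation settles near -cos(60 deg) = -0.5.
angle = math.pi / 3
corr = sum(s * t for s, t in (measure_singlet(0.0, angle, rng) for _ in range(trials))) / trials
assert abs(corr - (-0.5)) < 0.05
```

Each individual outcome is random, which is why these correlations cannot carry a message by themselves; that randomness is also what makes the scheme attractive for key distribution, since an eavesdropper's measurement disturbs the shared state.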
You'd typically need a vacuum to carry the signals, and such a vacuum is not easy to create on the surface of the Earth over long distances.<br /></p><p>The broad application of this kind of reliable stream of paired-particle plumbing for the purpose of messaging is many orders of magnitude more complex in scale than even the work involved in creating a quantum computer. So the practical application of the benefits is many decades further in the future than where we find ourselves today. For now, our best hope for reliable encryption for our internet and software industries is to rely on means of encryption that are designed to be too complex for a computer of any type to decrypt. The methods to create this kind of incredibly strong encryption are themselves fascinating. So I'll get to those in a subsequent post.<br /></p><p>While it will be hard to determine when quantum computers will be turned toward exploiting vulnerabilities in legacy encryption, there are many who would agree that it's time to start battening down the hatches now. <br /></p>ncubeeighthttp://www.blogger.com/profile/07874174268038500927noreply@blogger.com0tag:blogger.com,1999:blog-4288746931605915378.post-70542845848543882052020-05-31T20:05:00.002-07:002020-06-13T20:21:10.539-07:00Money, friction and momentum on the webBack when I first moved to Silicon Valley in 1998, I tried to understand how capital markets made the valley such a unique place for inventors and entrepreneurs. Corporate stocks, real estate, international currency and commodities markets were concepts I was well familiar with from my time working at a financial news service in the nation's capital in the mid-1990s. However, crowdfunding and angel investing were new concepts to me 20 years ago. Crowdfunding platforms seemed to be more to the advantage of the funding recipient than the balanced two-sided exchanges of the commercial financial system. 
I often wondered what motivated generosity-driven funding models, and how that motivation differed from reward-driven sponsorships.<br />
<br />
When trying to grasp the way angel investors think about
entrepreneurship, my friend Willy, a serial entrepreneur and investor, said: “If you want to see something succeed, throw money at it!” The
idea behind the "angel" is that they are the riskiest of risk capital. Angel investors fund startups before banks and venture capital firms will. They seldom get payback in kind from the companies they sponsor and invest in. Angels are the lenders of first resort for founders because they tend to be more generous, more flexible and more forgiving than commercial lenders. They value the potential success of the venture far more than they value the money they put forth. And the contributions of an angel investor can have an outsized benefit in the early stage of an initiative by sustaining the founder/creator at their most vulnerable stage. But what do they get out of it that is worth more than money to them?<br />
<br />
Over the course of the last couple of decades I've become a part of the crowdfunding throng of inventors and sponsors. I have contributed to small business projects on Kiva in over 30 countries, and backed many small-scale projects across Kickstarter, Indiegogo and Appbackr. I've also been on the receiving side, having the chance to pitch my company for funding on Sand Hill Road, the strip of venture capital firms that populates the hills near Palo Alto. As a funder, it has been very enlightening to know that I can be part of someone else's project by chipping in time, sharing insights and capital to get improbable projects off the ground. And the excitement of following the path of the entrepreneurs has been the greatest reward. As a founder, I remember framing the potential of a future that, if funded, would yield significant returns to the lenders and shareholders. Of course, the majority of new ventures do not come to market in the form of their initial invention. Some of the projects I participated in have launched commercially and I've been able to benefit. (By getting shares in a growing venture or by getting nifty gadgets and software as part of the pre-release test audience.) But those things aren't the reward I was seeking when I signed up. It was the energy of participating in the innovation process and the excitement about making a difference. After many years of working in the corporate world, I became hooked on the idea of working with engineers and developers who are bringing about the next generation of our expressive/experiential platforms of the web.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi9eXFAHY7v0909Qi_8aRda4eQ6H6-zG6jOklLNtENKuYf3E_w7Efl5zMkHUvfiAL7VwusTltThaM3mrKDSRm3aD4IfSwTQl6vrCWm0103wdWqIkB1aPpT92QYAHJglR8NKQQjwFJswm_A/s1600/Money+as+Communication.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="842" data-original-width="1600" height="168" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi9eXFAHY7v0909Qi_8aRda4eQ6H6-zG6jOklLNtENKuYf3E_w7Efl5zMkHUvfiAL7VwusTltThaM3mrKDSRm3aD4IfSwTQl6vrCWm0103wdWqIkB1aPpT92QYAHJglR8NKQQjwFJswm_A/s320/Money+as+Communication.png" width="320" /></a></div>
During the Augmented World Expo in May, I attended a conference session called "<a href="https://www.awexr.com/usa-2020/agenda/1854-web-monetization-and-social-signaling-in-mozilla-h" target="_blank">Web Monetization and Social Signaling</a>," hosted by Anselm Hook, a researcher at the web development non-profit Mozilla, where I also work. He made an interesting assertion during his presentation: "Money appears to be a form of communication." His study observed how platform-integrated social signals (such as up-voting, re-tweeting and applauding with hand-clapping emojis) draw attention to content users had discovered on the web, in this case within the content recommendation platform of the <a href="https://mixedreality.mozilla.org/firefox-reality/" target="_blank">Firefox Reality VR</a> web browser. There are multiple motivations and benefits for this kind of social signaling. It serves as a bookmarking method for the user, it increases the content's visibility to friends who might also like the content, it signals affinity with the content as part of one's own identity and it gives reinforcement to the content/comment provider. Anselm found in his research that participants actually reacted more strongly when they believed their action <i>contributed financial benefit</i> directly to the other participant. Meaning, we don't just want to use emojis to make each other feel good about their web artistry. In some cases, we want to cause profit for the artist/developer directly. Perhaps a gesture of a smiley-face or a thumb is adequate to assuage our desire to give big-ups to an artist, and we can feel like our karmic balance book is settled. But what if we want to do more than foist colored pixels on each other? Could the web do more to allow us to financially sustain the artist wizards behind the curtain? Can we "tip" the way we do our favorite street musicians? Not conveniently, because the systems we have now mostly rely on the credit card. 
But in the offline context, do we interrupt a street busker to ask for their Venmo or PayPal account? We typically use cash, which as yet has only rough analogues in our digital lives.<br />
<br />
When I lived in Washington DC, I had the privilege to see the great Qawwali master <a href="https://en.wikipedia.org/wiki/Nusrat_Fateh_Ali_Khan" target="_blank">Nusrat Fateh Ali Khan</a> in concert. Qawwali is a style of inspired Sufi mystical chant combined with call-and-response singing with a backup ensemble. Listening for hours as his incantations built from quiet mutterings accompanied by harmonium and slow-paced drums to a crescendo of shouts and wails of devotion at the culmination of his songs was very transporting in spite of my dissimilar cultural upbringing and language. What surprised me, beyond the amazing performance of course, was that as the concert progressed people in the audience would get up, dance and then hurl money at the stage. "This is supposed to be a devotional setting, isn't it? Hurling cash at the musicians seems so profane," I thought. But apparently this is something that one does at concerts in Pakistan. The relinquishing of cash <i>is</i> devotional, like Thai Buddhists <a href="https://leapingaroundtheworld.wordpress.com/thailand/" target="_blank">offering gold leaf</a> by pressing it into the statues of their teachers and monks. Money is a form of communication of the ineffable appreciation we feel toward those of greatness in the moment of connection or the moment of realization of our indebtedness. Buying is a different act, personal but not expressive. When we buy, it is disconnected from the artistry of the moment. No lesser appreciation, for sure. It's different because it isn't social signaling; it's coveting. When in concerts or in real-time scenarios we transmit our bounty upon
another, it is an act of making sacrifice and conferring benefit. The underlying meaning of
it may be akin to "I hope you succeed!" or, "I relinquish my having so
that you might have." I'm glossing over the cultural complexity of the gesture, surely. Japanese verbs have subtle ways to distinguish the transfer and receipt of benefit according to seniority, societal position and degree of humility: giving to another "ageru/sashiageru", a superior giving to oneself "kudasaru", a peer giving to oneself "kureru", and receiving "morau/itadaku". The psychological subtlety of the transfer of boons between individuals is scripted deeply within us, all the more accentuating how a plastic card or a piece of paper barely captures the breadth of expression we caring animals have.<br />
<br />
The web of yesteryear has done a really good job of covering the coveting use case. Well done, web! Now, what do we build for an encore? How can we emulate the other expressions of human intent that coveting and credit cards don't cover?<br />
<br />
In the panic surrounding the current Covid pandemic, I felt a sense of being disconnected from the community I usually am rooted in. I sought information about those affected internationally in the countries I've visited and lived in, where my friends and favorite artists live. I sought out charitable organizations positioned there and gave them money, as it was the least I felt I could do to reach those impacted by the crisis remote from me. Locally, my network banded together to find ways that we could mobilize to help those affected in our community. We found that the metaphor of "gift cards" (a paper coupon) could be used to foist cash quickly into the coffers of local businesses so they could meet short-term spending needs to keep their employees paid and their businesses operational even while their shops were forced into closure in the interest of public safety. I found the process very slow and cumbersome, as I had to write checks, give out credit cards (to places with which I would never typically share sensitive financial data), find email addresses for people to transmit PayPal payments to, and in some cases resort to paper cash for those whom the web could not reach.<br />
<br />
This experience made me keenly aware that the systems we have on the web don't replicate the ways we think and the ways that we express our generosity in the modern world. As web developers, we need to enable a new kind of gesture akin to the act of tipping with cash in offline society. Discussing this with my friend <a href="https://twitter.com/aneilbaboo" target="_blank">Aneil</a>, he asserted that donor platforms like Patreon, as well as blockchain currencies, can fit the bill for addressing the donor need, if the recipient is set up to receive them. He cautioned that online transactions are held to a different standard than cash in US society because of "Know Your Customer" regulation, which was put in place to stem the risk of money laundering through anonymous transactions. As we discussed the idea of peer-to-peer transactions in virtual environments, he pointed out, "The way game companies get around that is to have consumers purchase in-game credits that cannot be converted back into money." The government is fine with people putting money into something. It's the extraction from the flow of exchange in the monetary sense that needs to be subject to the regulations designed for taxation and investment controls.<br />
<br />
Patreon, like PayPal, is a cash-value paired system, while virtual currencies such as Bitcoin, BAT and Ethereum can be variable in exchange value for their coin. Blockchain ledger transactions trace exactly who gave what to whom. So, they are in theory able to comply with KYC restrictions even in situations where the exchange is relatively anonymous. Yet they are wildly different in terms of how the currency holders perceive their value. Aneil pointed out that Bitcoin is bad for online transactions because its scarcity model incentivizes people to hold onto it. It's like gold, a slow currency. A valuable cryptocurrency therefore would slow down rather than facilitate donation and tipping. You need a currency that people are comfortable holding for only short periods of time, like the funds in a Kiva or Patreon wallet. If people are always withdrawing from the currency for fear of its losing value, then the currency itself isn't stable enough to be the basis of a robust transaction system. For instance, when I was in Zimbabwe, where inflation of the paper currency is incredibly high, people wanted to get rid of it quickly for some other asset that lost value <i>slower</i> than the paper notes. Similarly, Aneil pointed out, any coin that you use to transact virtually could suffer the incentive to cash out quickly, which would drive the value of the asset in a fluid marketplace lower. Cash proxies don't have an inherent value unless they are underpinned by an artificial or perceived scarcity mechanism. The US government has an agency, the Federal Reserve, whose mission it is to ensure that money depreciates slowly enough that the underlying credit of the government stays stable and encourages growth of its economy. Any other currency system would need the same. Bitcoin can't be it because of its exceedingly high scarcity, which leads to hoarding. 
Until web developers solve this friction problem, web transactions, and therefore web authorship, will be starved of the support they need to grow.<br />
<br />
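The spend-or-hold incentive Aneil describes can be reduced to a one-line expected-value comparison. Here is a toy sketch of my own (not a model from our discussion): a rational holder spends a currency expected to lose purchasing power and hoards one expected to gain.

```javascript
// Toy model: compare a unit of currency's purchasing power now
// versus its expected purchasing power after one period.
function spendOrHold(expectedAnnualChange) {
  const valueNow = 1.0;
  const valueNextYear = valueNow * (1 + expectedAnnualChange);
  // If holding is expected to gain value, the rational move is to hoard.
  return valueNextYear > valueNow ? "hold" : "spend";
}

console.log(spendOrHold(0.5));  // a scarce, appreciating coin -> "hold"
console.log(spendOrHold(-0.9)); // hyperinflating paper notes  -> "spend"
```

This is exactly why a coin meant for tipping needs to hold its value just long enough to pass along: neither hoarded like gold nor dumped like hyperinflating notes.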
Understanding this underlying problem of financial sustainability, my colleague Anselm is <a href="https://hacks.mozilla.org/2020/03/web-monetization-coil-and-firefox-reality/" target="_blank">working with crypto-currency enabler Coil</a> to try to apply crypto-currency sponsorship to peer and creator/recipient exchanges on the web. He envisions a future where users could casually exchange funds in a virtual, web-based or real-world "augmented reality" transaction without needing to exchange credit card or personal account data. This may sound mundane, because the use-case and need is obvious, as we're used to doing it with cash every day. The question it raises is, why can't the web do this? Why do I need to exchange credit cards (sensitive data) or email (not sensitive but not public) if I just want to send credits or tips to somebody? There was an early success in this kind of micropayments model when <a href="https://en.wikipedia.org/wiki/Anshe_Chung" target="_blank">Anshe Chung</a> became the world's first virtual-world millionaire by selling virtual goods to Second Life enthusiasts. Linden Lab's virtual platform had the ability for users to pay money to other peer users inside the virtual environment. With a bit more development collaboration, this kind of model may be beneficial to others outside of specific game environments.<br />
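For context, the Web Monetization technique that Coil backs works by letting a page declare an Interledger "payment pointer" in its markup; a monetization-enabled browser or extension then streams micropayments toward that wallet while the visitor is on the page. A minimal sketch, using a hypothetical payment pointer (a real one is issued by an Interledger-enabled wallet provider):

```html
<!-- Hypothetical payment pointer for illustration only -->
<meta name="monetization" content="$wallet.example.com/anselm" />
```

The draft spec also exposes events such as monetizationstart and monetizationprogress on document.monetization, so a page can react as funds arrive, say, by thanking the visitor or unlocking bonus content.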
<br />
Anselm's speech at AWE introduced the concept of a "tip-jar," something we're familiar with from everyday offline life, to the nascent ecosystem of virtual and augmented reality web developers. For most people who are used to high-end software being sold as apps in a marketplace like iTunes or the Android Play Store, the idea that we would pay web pages may seem peculiar. But it's not too far a leap from how we generally spend our money in society. Leaving tips with cash is common practice for Americans. Even when service fees are not required, Americans tend to tip generously. Lonely Planet dedicates sections of its guidebooks to local customs around money, and I've typically seen that Americans have a looser idea of tip amounts than people in other countries.<br />
<br />
Anselm and the team managing the "<a href="https://www.grantfortheweb.org/" target="_blank">Grant for the Web</a>" hope to bring this kind of peer-to-peer mechanism to the broader web around us by utilizing Coil's grant of $100 million in crypto-currency toward achieving this vision. <br />
<br />
If you're interested in learning more about the web-monetization initiative from Coil and Mozilla, please visit: https://www.grantfortheweb.org/<br />
<br />ncubeeighthttp://www.blogger.com/profile/07874174268038500927noreply@blogger.com0tag:blogger.com,1999:blog-4288746931605915378.post-40422479927686570702020-01-01T08:55:00.163-08:002021-08-15T12:27:54.027-07:00The Momentum of Openness - My Journey From Netscape User to Mozillian Contributor<p><br /></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj7S4HSwvuka9nb_83smb-kaPfaQjSb600AaqgCPdn0tpUEGESLZTfzgD0MKEwHW6HjJwbRVOxJ6A6eakDeqAhXHlZHT31jTChBIXAEVb5DWEXV7L23A7_SfAiiWNemPC70RAUfJ8qwDr8/s1407/Momentum+Cover.png" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="1407" data-original-width="1355" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj7S4HSwvuka9nb_83smb-kaPfaQjSb600AaqgCPdn0tpUEGESLZTfzgD0MKEwHW6HjJwbRVOxJ6A6eakDeqAhXHlZHT31jTChBIXAEVb5DWEXV7L23A7_SfAiiWNemPC70RAUfJ8qwDr8/w310-h320/Momentum+Cover.png" width="310" /></a></div><p></p><p>(Update: Because this post is exceedingly long, I have decided to make it available as a printed book: <a href="https://www.amazon.com/gp/product/B08GKYSGLG" target="_blank">Momentum of Openness</a> It will remain free to read here.) Insider story behind the cover image: Mozilla's mascot derived from the name of the Mosaic browser and the trademarked name of a large mythical beast from Japanese culture which would rise from the oceans to protect mankind against peril. You may see this mythical creature in Bugzilla, or featured in popular web browsers like Chrome when they are having issues addressing your requests. I like to call it "The Mozilla" because it serves as a protector of all that's good. When I first came to the headquarters of Mozilla, I had to get a picture being bitten by the Mozilla. 
You'll understand why we feel so affectionately about this symbolic icon as you read the story of my journey to web development below.</p><p><b>Foreword</b></p><p></p><p></p><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiuaprZP8iJK2s6PnXe8nHFXhpoh1mTXE2HeE_JZYzRjRIVddhFU8uhERJrT0qMyK5Cv2oI0mxyQ_qGrHeKscfGiq0QHtzBrdfXYjp1HnAm-kLSTUnFlOpKIDkg0mptKb75jOzjGdS09WE/s600/Mozillasaur.jpg" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="600" data-original-width="400" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiuaprZP8iJK2s6PnXe8nHFXhpoh1mTXE2HeE_JZYzRjRIVddhFU8uhERJrT0qMyK5Cv2oI0mxyQ_qGrHeKscfGiq0QHtzBrdfXYjp1HnAm-kLSTUnFlOpKIDkg0mptKb75jOzjGdS09WE/w266-h400/Mozillasaur.jpg" width="266" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><b><span style="font-size: xx-small;">Shepard Fairey's Dino</span></b><br /></td></tr></tbody></table>Working at Mozilla has been a very educational experience over the
past eight years. I have had the chance to work side-by-side with many
engineers at a large non-profit whose business and ethics are guided by a
broad vision to protect the health of the web ecosystem. How did I go
from being in front of a computer screen in 1995 to being behind the
workings of the web now? Below is my story of how my path wended from
being a Netscape user to working at Mozilla, the heir to the Netscape
legacy. It's amazing to think that a product I used 25 years ago ended
up altering the course of my life so dramatically thereafter. But the
world and the web were much different back then. And mine is just one of
thousands of similar stories of people coming together for a cause
they believed in.<br />
<br />
<b>The Winding Way West</b><br />
<br />
Like
many people my age, I followed the emergence of the World Wide Web in
the 1990’s with great fascination. My father was an engineer at
International Business Machines when the Personal Computer movement was
just getting started. His advice to me during college was to focus on
the things you don't know or understand rather than the wagon-wheel ruts
of the well-trodden path. He suggested I study many things, not just the
things I felt most comfortable pursuing. He said, "You go to college so
that you have interesting things to think about when you're waiting at
the bus stop." He never made an effort to steer me in the direction of
engineering. In 1989 he bought me a Macintosh personal computer and
said, "Pay attention to this hypertext trend. Networked documents are
becoming an important new innovation." This was long before the World
Wide Web became popular in the societal zeitgeist. His advice was
prophetic for me.<br />
<br />
After graduation, I moved to Washington, DC to
work for a financial news wire that covered international business, US
economy, <a href="https://en.wikipedia.org/wiki/World_Trade_Organization">World Trade Organization</a>, <a href="https://en.wikipedia.org/wiki/Group_of_Seven">G7,</a> <a href="https://en.wikipedia.org/wiki/Office_of_the_United_States_Trade_Representative">US Trade Representative</a>, the <a href="https://en.wikipedia.org/wiki/Federal_Reserve">Federal Reserve</a>
and breaking news that happened in the US capital. This era stoked my
interest in business, international trade and economics. During my
research (at the time, via a <a href="https://en.wikipedia.org/wiki/Netscape">Netscape</a> browser, using <a href="https://en.wikipedia.org/wiki/AltaVista">AltaVista</a> search engine) I found that I could locate much of what I needed on the web rather than in the paid <a href="https://en.m.wikipedia.org/wiki/LexisNexis">LexisNexis</a> database, which I also had access to at the National Press Building. <br />
<br />
When the Department of Justice initiated its <a href="https://en.wikipedia.org/wiki/United_States_v._Microsoft_Corp.">anti-trust investigation into Microsoft,</a> for what was called anti-competitive practices against <a href="https://en.wikipedia.org/wiki/Netscape">Netscape</a>,
my interest was piqued. Philosophically, I didn’t particularly see
what was wrong with Microsoft standing up a competing browser to
Netscape. Isn’t it good for the economy for there to be many competing
programs for people to use on their PCs? After all, from my
perspective, it seemed that Netscape held the monopoly in the
browser space at the time.<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://grainger.illinois.edu/alumni/hall-of-fame/marc-andreessen" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;" target="_blank"><img border="0" data-original-height="269" data-original-width="200" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjtxM41_4J5ZUYSmy6n7QvamxUZbdGf9gD0_1C6JQP_64ZCfn0I_YMIyXE25rAWZ8NcMR2A8zXS4oGJXiKmzTNDEArYYCY9gENkEdYgkFRcoc9qqX2yZa97AO_bVxDyZIUNyWVKucyRaGs/w238-h320/Marc+Andreessen.jpg" width="238" /></a></div>Following this case was my first
exposure to the ethical philosophy of the web developer community.
During the testimony, I learned how <a href="https://en.wikipedia.org/wiki/Marc_Andreessen">Marc Andreessen</a>, and his team of software developer pioneers, had an idea that access to the internet (like the underlying <a href="https://en.wikipedia.org/wiki/Internet_protocol_suite">TCP/IP</a>
protocol) should not be centralized, or controlled by one company,
government or interest group. And the mission behind Mosaic and
Netscape browsers had been to ensure that the web could be device and
operating system agnostic as well. This meant that you didn’t need to
have a Windows PC or Macintosh to access it. <br />
<br />
It was fascinating to me that there were people acting like <a href="https://en.m.wikipedia.org/wiki/Jiminy_Cricket">Jiminy Cricket</a>,
Pinocchio's conscience, overseeing the future openness of this nascent
developer environment. Little did I know then that I myself was being
drawn into this cause. What I took away from the DOJ/Microsoft consent decree was
the concept that our government wants to see our economy remain <i>inefficient</i>
in the interest of spurring diversity of competitive economic
opportunity. Many companies doing the same thing, which seemed like a waste to me, would spur
a plurality of innovations that would improve with each iteration. Then
when these iterations compete in the open marketplace, they drive consumer choice and pricing
competition which by a natural process would lower prices for the average American consumer.
In the view of the US
government, monopolies limit this choice, keep consumer prices higher,
and stifle entrepreneurial innovation. US fiscal and trade policy was therefore
geared toward the concept of creating greater open access to
world markets in an effort to increase global quality of life through "spending power" for
individuals in participating economies it traded with. Inflation control and cross-border currency
stability is another interesting component of this, which I'll save for a future blog post.<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://www.britannica.com/biography/Alan-Greenspan" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;" target="_blank"><img border="0" data-original-height="366" data-original-width="550" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgmCZzHlGq8sOTI1df7r6zoD6LwFweRc4P4Lwxn-DB1NOMVWZIlv1RUSQcMklOwQ-B58kO1sIkgOkeEAhWOffOVUAJ0SGefNVv7D1h1P6bUNsxurmN1Oj3B6Ro6Mf5dCGVg5FUkprKlYzM/s320/Alan-Greenspan-Presidential-Medal-of-Freedom-George-2005.jpg" width="320" /></a></div>The next wave of influence in my journey to web development came from
the testimony of the chairman of the Federal Reserve Bank. The Federal Reserve is the US Central Bank.
In the press it is typically just called "The Fed." It is a non-partisan agency that is in charge of
managing money supply and inter-bank lending rates which influence the flow of currency in the US economy.
They would regularly meet at G7 conferences in Washington DC with finance ministers and central bankers of other major economies to discuss interest rates and fiscal policies. At the time, the
Fed Chairman was <a href="https://en.wikipedia.org/wiki/Alan_Greenspan">Alan Greenspan</a>.
Two major issues were top of the testimony agenda during his
congressional appearances in the late 1990’s. First, the trade
imbalances between the US (a major international importer) and the
countries of Asia and South America (which were major exporters), which
were seeking to address these imbalances via the WTO and
regional trade pacts. In Mr. Greenspan’s testimonies, Congressional
representatives would repeatedly ask whether the internet would change
this trade imbalance as more of the services sector moved online.<br /><br />
As someone who used a <a href="https://en.wikipedia.org/wiki/Dial-up_Internet_access">dial-up modem</a> to connect to the internet at home (<a href="https://en.wikipedia.org/wiki/Digital_subscriber_line">DSL</a>
and cable/dish internet were not yet common at the time) I had a hard
time seeing how web services could offset a multi-billion dollar asymmetry
between the US and its trading partners. But at one of these
sessions, <a href="https://en.wikipedia.org/wiki/Barney_Frank">Barney Frank</a> (one of the legislators behind the "<a href="https://en.wikipedia.org/wiki/DoddFrank_Wall_Street_Reform_and_Consumer_Protection_Act">Dodd-Frank</a>"
financial reform bill, which passed after the financial crisis) asked Mr.
Greenspan to talk about the impact of electronic commerce on the US
economy. Mr. Greenspan, always wont to avoid stoking market
speculation, dodged the question, saying that the Fed couldn't forecast
how the removal of warehousing costs would impact market
efficiency, and therefore markets at large. This speech stuck with me. At the time they were discussing Amazon, a bookseller which could avoid the typical overhead of a traditional retailer by eliminating brick-and-mortar storefronts with their inventory stocking burdens. Bookstores allocated shelving space in retail locations for products consumers might never buy. A small volume of sales therefore had to cover the real estate cost of the bulk of items which would seldom be purchased. Amazon was able to source the books at the moment the consumer decided to purchase, which eliminated the warehousing and shelf-space cost, thereby yielding cost savings in the supply chain.<br />
<br />
It was at this time that my
company, Jiji Press, decided to transition its service to a web-based news portal as
well. I worked with our New York bureau team during the process of our
network conversion from the traditional telephony terminals we used to new DSL-based networks.
Because I'm a naturally-inclined geek, I asked lots of questions
about how this worked and why it worked better than our terminal-style business
(which was similar to a Japanese version of the Reuters, Associated Press and Bloomberg terminals).
<br /><br />
This era came to be called the "dotcom boom" during which every company in the US
started to establish a web presence through the launching of web services with a ".com"
top level domain name. A highly speculative stock market surge started around businesses that
seemed poised to capitalize on this rush to convert to web-based services.
Most companies seeking to conduct initial public offerings in this time
listed on the Nasdaq exchange, which seemed to have a highly volatile upward trajectory.
In this phase, Mr. Greenspan cautioned against "irrational
exuberance" as stock market valuations of internet companies were
soaring to dizzying proportions relative to the value of their
projected sales. I decided that I wanted to move to Silicon Valley to enter the fray myself.
It wasn't just the hype of the digital transformation or the stock valuations that enticed me.
I had studied the trade pacts of the United States Trade Representative vis-à-vis the negotiators of the Japanese government.
I had a strong respect for the ways that the two governments negotiated
to reduce tariffs between the two countries for the benefit of their citizens.
I knew that it was unlikely that I could pursue a career path to become a government negotiator,
so I decided that my contribution would be in conducting international market launches and
business development for internet companies between the two countries. <br /><br />
After a stint building websites with a small design agency, I
found my opportunity to pitch a Japanese market launch for a leading
search engine called LookSmart, which was building distributed search engines to power other portals.
Distributed search was an enterprise business model, called business-to-business (B2B), whereby we
provided infrastructure support for other portals that had their own direct audiences.<br />
<br />
<div class="separator" style="clear: both; text-align: left;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgIerMPgnzrbL4nowHDBe4Qni0bi4XCWaAIog1Vyz_SVm65IwT4oIAEcn2bjtVJE0FpopuiXhzCE_-pyDnyfemvYkt4XQ8erIcG41sMzIIA9z6IbxYeaJ9AGDwf1d_SWPXkWAdpFkru4xo/s1983/IsizeTravel.png" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img alt="First launch of LookSmart Japanese Search on Isize" border="0" data-original-height="1983" data-original-width="1412" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgIerMPgnzrbL4nowHDBe4Qni0bi4XCWaAIog1Vyz_SVm65IwT4oIAEcn2bjtVJE0FpopuiXhzCE_-pyDnyfemvYkt4XQ8erIcG41sMzIIA9z6IbxYeaJ9AGDwf1d_SWPXkWAdpFkru4xo/w229-h320/IsizeTravel.png" title="First launch of LookSmart Japan on Isize" width="229" /></a>I recruited a team to build the initial Japanese search engine index. This was a curated search directory, a fixed database of authoritative websites on given topics, which we could query with boolean search parameters and integrate into client websites. After my team completed the first build of the database, we demonstrated it to Microsoft, which in turn licensed our search engine to power their MSN branded portals. On this news we listed our company on the Nasdaq stock market and planned a global expansion. We thereafter formed a joint venture with British Telecommunications, called BT LookSmart, to expand the LookSmart search distribution world-wide. I relocated to Sydney as part of the new JV team to build the hosted-search front end pages for our network partners across Asia Pacific region. (With support of a developer team spanning across Australia, Israel and Norway.)
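The curated directory described above can be pictured as a topic-to-site index queried with Boolean operators. A minimal sketch under invented data (my own illustration, not LookSmart's actual engine; the site names are hypothetical):

```javascript
// Toy curated directory: topics mapped to vetted sites (hypothetical data).
const directory = {
  travel: new Set(["isize.example.jp", "odn.example.ne.jp"]),
  finance: new Set(["ocn.example.ne.jp", "odn.example.ne.jp"]),
};

// Boolean AND: sites listed under every requested topic.
function searchAll(topics) {
  const sets = topics.map((t) => directory[t] ?? new Set());
  if (sets.length === 0) return [];
  return [...sets[0]].filter((site) => sets.every((s) => s.has(site))).sort();
}

// Boolean OR: sites listed under any requested topic.
function searchAny(topics) {
  const result = new Set();
  for (const t of topics) for (const site of directory[t] ?? []) result.add(site);
  return [...result].sort();
}

console.log(searchAll(["travel", "finance"])); // -> ["odn.example.ne.jp"]
```

A fixed, human-curated index like this is cheap to query and easy to license to portal partners, which is what made the distributed-search model workable.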
Upon our first site launch, I moved to Tokyo to incorporate LookSmart Japan, hire a local team, and start building a local presence for the company. I turned my focus from product development to prospecting and contracting with local business partners, mainly portals. Recruit's Isize portal was the first to partner, followed swiftly by the other major ISP portals (Japan Telecom's ODN, NTT's OCN, KDDI's DION, NEC's Biglobe and Excite Japan). We complemented our offering with an advertising and paid-search indexing service that allowed marketers to promote their sites within our index. Through revenue-sharing partnerships we returned a share of these revenues to the portals and ISPs that had partnered with us. Over a years-long process of building customized search portals and gradually expanding our search content and capabilities, we came to represent a broad base of the Japanese internet market. As our back-end servers came to process a significant share of total Japanese search queries, advertisers sought to have their sites indexed in sponsored listings at an ever greater pace. <br /></div><div class="separator" style="clear: both; text-align: left;"> </div><div class="separator" style="clear: both; text-align: left;">After we were representing a majority of local ISP search volume, Yahoo
Japan took
interest in acquiring the company. I moved back to the US to work
with Yahoo headquarters to distribute search services to other countries
across
Asia Pacific. By this time Google, which had been an algorithmic search provider to yahoo.com, decided to break off from Yahoo to launch their own portal service. In this transition, Yahoo licensed its patents to Google so that they could launch a competing paid search offering.
Google in turn launched in Japan and began bidding against my former
partnerships from the Looksmart Japan network. I thought it was
exceedingly generous for Yahoo to empower their own competitors in the
interest of a diverse open market ecosystem! While I felt some sense of
regret that partnerships I'd worked years to establish were now in
contention with new market entrants, I came to understand the wisdom of
their decision. And in an ironic twist, Yahoo asked me to go on to pursue competitive bids to <i>counter</i> Google's future offers to our partners thereafter.
<br /></div><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgF6ogG4kH8rhZxSUeEnMWp21GTgPcWQmoamtb44zFEWyq5T-P8KtcRpvfswFFlNoKVU7xxujgMojS5uZQDTIl7Df5XTG6xQ7Qu0UXwGODJ9tvzUyaZ4DaEV8GOVAknhHI1Zj7wZtVU7Zw/s780/YahooSponsoredSearch.png" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="780" data-original-width="494" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgF6ogG4kH8rhZxSUeEnMWp21GTgPcWQmoamtb44zFEWyq5T-P8KtcRpvfswFFlNoKVU7xxujgMojS5uZQDTIl7Df5XTG6xQ7Qu0UXwGODJ9tvzUyaZ4DaEV8GOVAknhHI1Zj7wZtVU7Zw/w203-h320/YahooSponsoredSearch.png" width="203" /></a></div><p>Yahoo sent me back to Japan to develop and pitch new enhanced mobile search services that I hadn’t been able to offer from the LookSmart toolkit. (British Telecommunications did have a mobile app/portal in Japan which I supported as part of BT LookSmart, but our index was made up of traditional PC-focused web content that rendered poorly on small screens.) With Yahoo’s Tokyo-based team, we developed a mobile-site search index that excluded PC-focused content. We built a special mobile-centric advertising platform based on Yahoo’s patented auction-placement technology. We then had a search indexing capability to uniquely query content that was specially designed for the three major cell network companies across Japan at the time (Japan Telecom, which later became part of Softbank, KDDI, and NTT Docomo). We were announced in the Nikkei Business Publications' annual almanac for that year as one of the internet market's leading innovations. </p><p>When I'd go to the prospecting meetings on behalf of Yahoo, I'd hear comments in Japanese in the room, "Hey, isn't that the Looksmart guy?" to which I'd humbly ask their ongoing support and consideration. 
When people asked me to contrast our technologies against Google's I was able to speak from a great deal of experience with them. After all they'd been a partner of Yahoo's and an industry collaborator, even if they were owned by different shareholders. We all had built our distinct offerings on similar techniques of web crawling and algorithmic filtering. And our products leveraged and were protected by the same patents.</p><p></p><p>Over the course of the ensuing decade, my team continued to expand our licensing and search infrastructure partnerships more broadly. I conducted the search-infrastructure partnerships in Korea, Taiwan, Hong Kong, India, Brazil and Australia with local Yahoo teams. As part of its global expansion in search technology, Yahoo decided to acquire AltaVista, Inktomi, Overture and Fast Search. Eventually, Microsoft decided to give up outsourcing search services and launched their own search engine, branded as Bing, bringing even more competition into the space formerly dominated by Yahoo and Google. Thereafter dozens of subject-specific search providers sprung up to fill niche opportunities in shopping, food and site-specific indexing.</p><p><b>Meanwhile at Netscape</b><br /></p><p></p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://en.wikipedia.org/wiki/Mosaic_(web_browser)" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="585" data-original-width="630" height="298" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgV76z3tRoEwKf8jmcCqb1SLjtjdY2Fp33UbwUuXG62YmUGrIBV-FIaAPjxQOXFmm5f5vcKA6Q_f4R7RepGSXtpDcHAeeBZwH5UDgcWqy_dMllfB2xb1mLZLoEYRYVQCBHD9HdWH_9ZSrE/w320-h298/Mosaic_Netscape_0.9_on_Windows_XP.png" width="320" /></a></div>In parallel to my journey, Netscape had followed a somewhat bumpy trajectory. 
As I'd mentioned in the first chapter, Microsoft had decided to launch a competitor to the Netscape browser that Andreessen's team had developed out of Mosaic. While Netscape had fared very well after their successful Nasdaq public offering, the increasing competition from the launch of Microsoft's Internet Explorer pushed the active user base of Netscape Communicator products out of the majority share. The share price of Netscape had fallen to the point that the executive team started to look for strategic options for a new home for the product. Ultimately they received an acquisition bid from America Online, commonly referred to as AOL. AOL had started as a massively popular dial-up
modem service in the United States before the rise of DSL and cable broadband internet access.<p></p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://www.ucreative.com/articles/15-web-apps-for-web-designers/" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;" target="_blank"><img border="0" data-original-height="396" data-original-width="548" height="230" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgNmABWpU7wNAdAq7pgZn2pONBucZuGjSBAh_gXdkl2QhVnCf0GawM5YwBZXTCvbJ4O11okavyIDp_sjiiCLwBsOq9xUg_zvFmOjJ5KhEy4HXLwK9odHdGQwlArLMB5qGfXCj2Z9VchO0w/w320-h230/AOL-Browser.jpg" width="320" /></a></div>AOL had a browser of their own
too. But it was considered by many to be a <a href="https://en.wikipedia.org/wiki/Closed_platform">walled-garden</a>
browser that was tied to the portal service that AOL also owned, which provided their customers with what industry folk referred to as their "daily clicks" like news,
weather and email. Portals such as Yahoo and AOL sought to promote certain preferred content for new web users to ensure a positive experience of trusted or licensed content. At the same time, they wanted to protect their users from
the untamed territory of the world wide web of the 1990's, which they felt
was risky for the untrained user to venture into. (This was a
time of a lot of Windows viruses, pop-ups, scams, and few user
protections.) AOL's stock had done really well based on their success in internet connectivity services, content aggregation and advertising. So they asserted that they could put the necessary resources into growing Netscape back to its former prominence.<p></p><p>The
team at Netscape may have been disappointed that their world-pioneering
web browser was being acquired by a company that had a limited view
of the internet, even if AOL had been
pioneers in connecting the unconnected. It was probably a time of soul
searching for Marc Andreessen's supporters, considering that
the idea of Netscape had been one of decentralization, not corporate
mergers. A group of innovators inside AOL argued that a world dominated by Microsoft's IE browser would be a risky future for the open, competitive ecosystem of web developers. <br />
</p><div class="separator" style="clear: both; text-align: left;"><a href="https://bugzilla.mozilla.org/home" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;" target="_blank"><img border="0" data-original-height="920" data-original-width="1180" height="250" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh54s6PO9xQwgZrZh7wU22RQ1Nor-i5ONlWXbkh1jlf9R00P9fd4AveSn0wf9zHMa5ijOIzTOyewrUZ0Sx7JWSsp0NKmrj450WZA0-9z-DXvkuXZEk5dAnA-KPLAcd6779kBNoXrosvguA/w320-h250/BugzillaSampleFile.png" width="320" /></a> A small group of engineers inside the company
persuaded the AOL executives to set up a skunk-works team inside
AOL to open source the Netscape Communicator product. They achieved this by dividing the product into component
parts that could then be uploaded into a modular hierarchical bug triage
tree, which they referred to as <a href="https://bugzilla.mozilla.org/">Bugzilla</a>. By doing this, they theorized that people outside of AOL could help fix code problems that were cumbersome for internal AOL teams to solve. By allowing contributing developers to "fork" (meaning create derivative products) from the open source code base, they would further incentivize innovation, as those developers could compete based on features they introduced to their derivative products. Because the concept was created with a sense of generosity, they believed that most of the innovations would be shared back to AOL. Succeeding with a small fork is hard. But introducing software patches to the complete global audience of AOL/Netscape would benefit all other developers and users in turn. Of some surprise to me, they didn't make a requirement that forks thereafter be maintained as open source. So some of the employees of Mozilla would thereafter leave the company and launch their own browsers to compete directly. This reminded me of Yahoo's previous act of licensing out its innovations to competitors in the interest of a healthy competitive developer ecosystem. <br /></div><div class="separator" style="clear: both; text-align: left;"> </div><div class="separator" style="clear: both; text-align: left;">(If you are interested in the specific history of this open sourcing initiative, please consider seeking out the documentary
about this phase in AOL's history called "<a href="https://www.imdb.com/title/tt0499004">Code Rush</a>.")<br />
</div><p></p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi7Q0PHN_pK7Ggwi9xw9y9tkY71NuMhEUD_JgqtY9Y-DbN4W6pyNQ9gKDG0C_kONTEpr2pyl0NoGESyiGDVp2dBZVyJM5EMbwn6qKm0ge9reShhY1RH49HO-ZGsZDsoYK7uc3x-_q4NT24/s875/Mitchell.jpg" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="875" data-original-width="818" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi7Q0PHN_pK7Ggwi9xw9y9tkY71NuMhEUD_JgqtY9Y-DbN4W6pyNQ9gKDG0C_kONTEpr2pyl0NoGESyiGDVp2dBZVyJM5EMbwn6qKm0ge9reShhY1RH49HO-ZGsZDsoYK7uc3x-_q4NT24/w299-h320/Mitchell.jpg" width="299" /></a></div><p>The Mozilla Project grew inside AOL for a long while beside the AOL browser and
Netscape browsers. But at some point the executive team believed that
this needed to be streamlined. <a href="https://en.wikipedia.org/wiki/Mitchell_Baker">Mitchell Baker</a>, an AOL attorney, <a href="https://en.wikipedia.org/wiki/Brendan_Eich">Brendan Eich</a>, the inventor of JavaScript, and an influential venture capitalist named <a href="https://en.wikipedia.org/wiki/Mitch_Kapor">Mitch Kapor</a>
came up with a suggestion that the Mozilla Project should be spun out
of AOL. Doing this would allow all of the enterprises who had interest
in working in open source versions of the project to foster the effort
while the Netscape/AOL product team could continue to rely on any code
innovations for their own software within the corporation.<br />
<br /></p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgFbY2CoFd4CAh-3bchh2MxaJk6WSRhONKZ9fuf9tfPnRMkoVSEyaI-UwNvtVq8GAKRCObcBYEk-Rp6wb0NVbtGz-cVQ1rEPsIeZeS4UoXW3UZJo2uzNhCmeY4BDimjfBaaRU-UY9lehqg/s1206/Brendan.png" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="1060" data-original-width="1206" height="282" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgFbY2CoFd4CAh-3bchh2MxaJk6WSRhONKZ9fuf9tfPnRMkoVSEyaI-UwNvtVq8GAKRCObcBYEk-Rp6wb0NVbtGz-cVQ1rEPsIeZeS4UoXW3UZJo2uzNhCmeY4BDimjfBaaRU-UY9lehqg/w320-h282/Brendan.png" width="320" /></a></div><p>A
Mozilla in the wild would need resources if it were to survive. First,
it would need to have all the patents that were in the Netscape portfolio to avoid hostile legal challenges from outside. Second, there
would need to be a cash injection to keep the lights on as Mozilla
tried to come up with the basis for its business operations. Third, it
would need protection from take-over bids that might come from AOL
competitors. To achieve this, they decided Mozilla should be a
non-profit foundation with the patent grants and trademark grants from
AOL. Engineers who wanted to continue to foster the AOL/Netscape vision of
an open web browser specifically for the developer ecosystem could
transfer to working for Mozilla. (As they announced in their early blog post/announcement: https://blog.mozilla.org/press/2003/07/mozilla-org-announces-launch-of-the-mozilla-foundation-to-lead-open-source-browser-efforts/)<br />
<br />
Netscape had created a crowd-sourced web index (called <a href="https://en.wikipedia.org/wiki/DMOZ">DMOZ</a> or open directory) which had hard-coded links to most of the top websites of the time, aggregated by subject matter specialists who curated the directory in a fashion similar to Wikipedia of today. DMOZ went on to be the seed for the <a href="https://en.wikipedia.org/wiki/PageRank">PageRank</a>
index of Google when Google decided to split away from powering the
Yahoo search engine. It's
interesting to note that AOL played a major role in helping Google
become an independent success as well, which is documented in the
book <a href="https://www.amazon.com/Search-Rewrote-Business-Transformed-Culture/dp/1591840880/">The Search</a> by John Battelle.<br />
<br />
Once
the Mozilla Foundation was established (along with a $2 million grant
from AOL) they sought donations from other corporations who were to
become active on the Mozilla project. The team split out Netscape
Communicator's email component as the Thunderbird email application as a
stand-alone open source product. The browser, initially called Phoenix, was released to
the public as "Firefox" because of a trademark issue with another US
company over the usage of the term "Phoenix" in association with software.<br />
<br />
Google, freshly independent from Yahoo, offered to
pay Mozilla Foundation for search traffic that they could route to their
search engine. Taking revenue share from advertising was
not something that the non-profit foundation was particularly
well set up to do. So they needed to structure a corporation that could
ingest these revenues and re-invest them into a conventional software
business that could operate under the contractual structures of
partnerships with public companies. The Mozilla Corporation could
function much like any typical California company with business
partnerships without requiring its partners to structure their payments
as grants to a non-profit.<br />
<br />
When Firefox version 1.0 was launched, it rapidly spread in popularity. Mozilla did clever
things to differentiate their browser from what people were used to in
the Internet Explorer experience such as letting their users block
pop-up banners or customize their browser with extensions. The largest turning point for Firefox popularity came at a time when there was a vulnerability discovered
in IE6 that allowed malicious actors to take over the user's browser for
surveillance or hacking in certain contexts. (The vulnerability involved a component called <a href="https://us-cert.cisa.gov/ncas/archives/alerts/SA06-258A">ActiveX</a>.) The US government urged
companies to stop using IE6 and to update to a more modern browser. It
was at this time I remember our IT department at Yahoo telling all its
employees to switch to Firefox. I remember discussing this with my IT team and engineers who assured me that Firefox plus a Yahoo toolbar was just like the Yahoo browser itself. With this transition Yahoo could give up the burden of keeping their browser up to date, as Mozilla would do all that work and update all users for free on a regular release cadence. </p><p>This word-of-mouth promotion happened across the industry and employees would tell friends and family to switch browsers, or customize their browsers the way they did themselves. Suddenly you could get toolbars for any site you wanted that could add bookmarks, design themes and search preferences to the Firefox browser. Mozilla seemed to be doing a lot of work to keep the underlying browser updated. Yet this was a synergistic relationship because all the parties who relied on it would promote Firefox with the might of their own marketing channels and web links that promoted their own browser extensions. It was a perfect symbiotic relationship between otherwise unrelated companies because they were working off of a piece of software that was open source. They could have removed the Firefox brand from the open source browser if they wanted to, and many companies did launch forked browsers replacing the Firefox brand with their own brand. But many liked the brand-trust that Firefox itself had. So they promoted "add to Firefox" instead of trying to replace the user's existing browser entirely.<br />
<br />
Because Mozilla was a non-profit, as it grew it had
to reinvest all proceeds from their growing revenues back into web
development and new features. (Non-profits can't "cash out" or pay dividends to their shareholders.) So they began to expand outside the core
focus of JavaScript and browser engines alone. Several Mozillians departed to work on alternative open source browsers. The ecosystem grew suddenly as Apple and Google launched their own alternative browser engines on a similar open source model. As these varied browsers grew, the companies
collaborated on standards that all their software would use to ensure
that web developers didn't have to customize their websites to uniquely
address idiosyncrasies of each browser. To this day, browser makers collaborate to maintain "web compatibility" across the different browsers, along with each one's extensions model, so that developers can bring additional features to the browsers without those features having to be built into the browsers themselves.<br />
<br />
When I joined Mozilla, there were three major issues that
were seen as potential threats to the future of the open web ecosystem.
1) The "app-ification" of the web that was coming about in new smart-phones
and how they encapsulated parts of the web, 2) The proliferation of
dynamic web content that was locked in behind fragmented social
publishing environments, 3) The proliferation of identity management
systems using social logins that were cumbersome for web developers to
utilize. Mozilla, like a kind of vigilante superhero, devised innovative tactics to address each one of these.
It reminded me of the verve of the early Netscape pioneers who tried to
organize an industry toward the betterment of the entire
ecosystem. To discuss these different threads, it may
be helpful to look at what had been transforming the web in years
immediately prior. <br />
<br />
<b>What the Phone Did to the Web and What the Web Did Back</b><br />
<br />
The
web is generally based on html, CSS and JavaScript. A web developer
would publish a web page once, and those three components would render
the content of the webpage to any device with a web browser. What we
were going into in 2008 was an expansion of content publication
technologies, page rendering capabilities and even devices which were
making new demands of the web. It was obvious to us at Yahoo at the
time that the industry was going through a major phase shift. We were
building our web services on mashups of content sources. The past idea of the web was based on static
webpages that were consistent to all viewers globally. What we were going toward
was a sporadically-assembled web tailored to each user individually. The new style of page assembly, marketed as "web 2.0", was frequently called "mash-up" or "re-mix": multi-paned web pages that would assemble several discrete sources of content at the time of page load. </p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://en.wikipedia.org/wiki/Yahoo!_Mail" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;" target="_blank"><img border="0" data-original-height="237" data-original-width="420" height="181" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhw5cS-4dKRdDfrvDMKDV5uomr6DYAA4EVnVztupVDJ182y9VfWrb6AA4FCEgm2mmXpTJkguxXjMIyik5hd-4XGwUeL_Wn-7y3N9G_jJ7FJLmObcTWBFZgeEXsO8MTt3cbFAbP5Z4mkE64/w320-h181/Yahoo_Mail_desktop.png" width="320" /></a></div>We call this
AJAX, for "asynchronous JavaScript and xml" (xml = extensible markup language), which allowed personalized web content to be rendered on demand for each user. This kind of processing is referred to as "client side", meaning that your computer assembles the sources locally on your machine instead of just displaying a page that was entirely rendered on a web server. This is important not only because it off-loads burden from the web server, lowering cost, but also because it gives the user added privacy and security protections, as only they can see the unique assembly of rendered content. Web
pages of this era appeared like dashboards and would be constantly
refreshing elements of the page as the user navigated within panes of
the site.<br />
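The client-side assembly described above can be sketched roughly as follows. The pane names and fetchers here are hypothetical mocks standing in for real XMLHttpRequest/fetch calls to separate content services:

```javascript
// Sketch of "mash-up" style client-side assembly: several independent
// content sources are fetched asynchronously and combined into one page.
// These fetchers are hypothetical mocks standing in for real AJAX calls.
const fetchWeather = async () => ({ pane: "weather", body: "Sunny, 18C" });
const fetchStocks = async () => ({ pane: "stocks", body: "YHOO +1.2%" });
const fetchMail = async () => ({ pane: "mail", body: "3 unread" });

// Load all panes in parallel, the way an AJAX dashboard fills its frames
// without reloading the whole page from a single server-rendered template.
async function assemblePage(sources) {
  const panes = await Promise.all(sources.map((fetchPane) => fetchPane()));
  // Key each pane's content by name, as if filling <div> regions client-side.
  return Object.fromEntries(panes.map((p) => [p.pane, p.body]));
}

assemblePage([fetchWeather, fetchStocks, fetchMail]).then((page) => {
  console.log(page); // one personalized page, assembled on the client
});
```

Each pane resolves independently, which is why pages of this era could refresh one dashboard region at a time rather than re-rendering everything on the server.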
<br />In the midst of this shift to the spontaneously-assembled dynamic web, Apple launched the iPhone. What ensued
immediately thereafter was a kind of developer confusion as Apple
marketed the concept that every developer wishing to be
shown on its phones needed to customize content offerings as an app
tailored to the environment of the phone. Apple didn't remove their Safari browser, but they diverted attention to stand-alone app frameworks discovered through the iTunes App Store. The “tunes” part of the name came about because the iPhone was a derivative of Apple's earlier MP3 player, the iPod, so everything on the device, including software and "Podcasts", had to be synced with a PC through the music player. This was a staging strategy they replaced once the iPhone became a leading brand for the business. Much of this has changed in Apple's new architecture. Nowadays, the app marketplace, music app and podcasts app are all stand-alone products in their own right and AppleID has been "<a href="https://ncubeeight.blogspot.com/2013/08/">set free</a>" to have other purposes specific to account management and wallet-specific uses.<br /><p></p><p></p><p></p><p>It seemed strange that users would no longer view content on Apple devices using URLs but rather by downloading individual snippets of content into each developer's own <i>isolated content browser</i> on the user's iOS device. It wasn't just the developers who were baffled. It was the users too! It took a lot of marketing on Apple's part to get people familiar with an entirely new frame of thinking. They had to get people to stop going to their competitors' tools to search the web, but instead to have them think "There's an app for that!" as the Apple advertising slogan went. Apple wasn't just trying to confuse the market with this strategy. There are benefits to sand-boxing (meaning to metaphorically isolate a play area from the clean environment around it) different content sources from each other from a privacy and security standpoint. That's what the different frames of AJAX web pages did also. This just took the sand-boxing to an extreme. 
Apple engineers knew they were going to have a challenging time safeguarding a good user experience on their new phones if there were risks of conflicting code from different programs accessing the same hardware elements at the same time. So the app construct allowed them to avoid phone crashes and bugs by not letting developers talk to each other inside the architecture. Making the developers learn an entirely new coding language to build these apps was also done with a positive intent. They introduced new context-specific frameworks and utilities that were specific to a user-on-the-go. These common frameworks provided consistency of user interface design that was specific to the Apple brand image. Also developers could save time and cost if they did not need to create these common utilities and design elements from scratch. Theoretically a designer could build an app without the help of an engineer. An engineer could build an app without the help of a designer. It was an efficiency play to maximize participation by abstracting away the complexity of certain otherwise-mundane product concepts. <br /></p><p>Seeing the launch of
the iPhone capitalize on the concept of the personalized
dynamic web, along with the elements of location based content
discovery, I recognized that it was an outright rethinking of the potential of the web at
large. I had been working on the AT&T partnership with Yahoo at the
time when Apple had nominated AT&T to be the exclusive mobile carrier for the iPhone launch. Yahoo had done its best to bring access
to web content on low-end feature phones. These devices’ view of the web was incredibly
restricted. In some ways they felt like the AOL browser. You could get weather, stocks, sports scores, email, but otherwise you had difficulty navigating to anything that wasn't spoon-fed to you by your phone's carrier. </p><p>Collaborating with
Yahoo Japan, we'd stretched the limit of what a feature phone could do with mobile-ready web content. We must tip our hats to all that NTT Docomo “iMode” did with feature phone utilities before the touch-screen graphical user interface became mainstream. Japanese customers were amazingly versatile in adapting to the constraints of mobile phones. Users would write entire novels in episodic chapters for mobile phone consumption by commuters! But even at Yahoo we were falling over ourselves trying to help people re-format their sites to fit the small screen. It wasn't a scalable
approach though it bridged users through the transition as the web caught up.<br />
<br />
The
concept behind the iPhone was to present bite sized chunks of web content
to the phone specific to the user need in the moment. Breadth was not an advantage in a small screen space at a time when the user probably had limited time and attention. In theory, every existing webpage should
be able to render to the smaller screen without needing to be coded
uniquely. </p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjX-9FXPpN0OepmGXg0wYk_2OAZ-A_dMU6VrIETFVKtN-pZWvg_vCENEwINSoydpe8sKg6MY6r3CLKTRWO-Q2okzkGWM3CafAJOQ6KsgwcIGg02quEH87k83BummtY5kgExWa5InTrib4M/s2048/Ha%25CC%258Akon-Wium-Lie-2009-03.jpg" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="2048" data-original-width="1834" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjX-9FXPpN0OepmGXg0wYk_2OAZ-A_dMU6VrIETFVKtN-pZWvg_vCENEwINSoydpe8sKg6MY6r3CLKTRWO-Q2okzkGWM3CafAJOQ6KsgwcIGg02quEH87k83BummtY5kgExWa5InTrib4M/w286-h320/Ha%25CC%258Akon-Wium-Lie-2009-03.jpg" width="286" /></a></div><a href="https://en.wikipedia.org/wiki/H%C3%A5kon_Wium_Lie">Håkon Wium Lie</a> had created the idea of CSS (the cascading style sheet), which allowed an html-coded webpage to adapt to whatever size screen the user had. Steve
Jobs had espoused the idea that content rendered for the iPhone should
be written in html5. However, at the time of the phone’s release, many
websites had not yet adapted their site to that new standard means of
developing html to be agnostic of the device of the user. Web
developers were very focused on the then-dominant desktop personal
computer environment. While many web pioneers had sought to push the web
forward into new directions that html5 could enable, most developers
were not yet on board with those concepts. So the idea of the “native
mobile app” was pushed forward by Apple to ensure the iPhone had a
uniquely positive experience distinct from what every other
phone would see, a poorly-rendered version of a desktop-focused website.<p></p><p>I don't know any of the team at Apple who made these choices. But I have the sense that I understand the motives of what they were trying to do. In the developer community we have a term for "hacking" or "shimming" solutions to a code problem. This is generally meant in a positive connotation of making-do with what you have to achieve the outcome to address the user need. It's synonymous in our lingo with "improvising" but is not to be confused with the concept of malicious activity to undermine someone or something. (That is the general cultural adoption of the term.) When a hack or a shim is used to make a project fit the project or product's expected acceptance criteria, there is a general understanding that, once time allows, the shim will be removed for a more exact solution. So in my view, Apple shimmed the iPhone project and layered on the unwieldy scaffolding with the expectation and hope that over time those constructs could be removed from the code and replaced with a more user-friendly solution. <br /><br />
Mozilla engineers shared the concerns that Apple had about user pain points of using the web in a mobile context. (I'll speak generally here because there was no explicit company stance about Apple's approaches on this issue. But there was general sentiment that web developers needed some help on the html5 front.) Mozilla aspired to help the mobile adaptation of the web by conducting outreach on the ideas of "responsive web design" which is at the core of html5's purpose. This may sound esoteric. But what it means is that you should leverage your URL as if it is a conversation with the user. A user's browser transmits what's called a header in their http request to visit your site. Ideally, a webpage should listen to who's calling before it answers. If the call comes from a Safari browser on an iPhone, or a Chrome browser on an Android device, your site should be "responsive" in that it returns a mobile-styled format using CSS to respect the limits or specific elements of your page that are relevant to the user's context.<br /></p><p></p><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg_KkZbyCk_JOEhZ8kX9P27XEPltmtntlU2ACGxAW754bR9_x8GPQN7a6bY9TtKw2iR3eUJ80rHahFXJgOlG0k0mU_Sz1_Phe3sEoCqiPnowHzJPdCL6m1hV9871N2pOiPETxND6qQGHnE/s617/firefox-keon-mobile-phone.jpg" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="617" data-original-width="611" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg_KkZbyCk_JOEhZ8kX9P27XEPltmtntlU2ACGxAW754bR9_x8GPQN7a6bY9TtKw2iR3eUJ80rHahFXJgOlG0k0mU_Sz1_Phe3sEoCqiPnowHzJPdCL6m1hV9871N2pOiPETxND6qQGHnE/w317-h320/firefox-keon-mobile-phone.jpg" width="317" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><span style="font-size: xx-small;"><b>First Firefox OS 
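As a rough sketch of that "listen before you answer" idea, a server might branch on the incoming User-Agent header to pick a stylesheet. The patterns and file names here are illustrative assumptions, and real responsive design relies primarily on CSS media queries rather than header sniffing:

```javascript
// Sketch of server-side "listening to who's calling": inspect the
// User-Agent request header and choose a stylesheet accordingly.
// The patterns and file names are illustrative, not an exhaustive list.
function chooseStylesheet(userAgent) {
  const mobilePatterns = [/iPhone/, /Android/, /Mobile/];
  const isMobile = mobilePatterns.some((re) => re.test(userAgent || ""));
  return isMobile ? "mobile.css" : "desktop.css";
}

// Example header values (abbreviated from real browser strings):
console.log(chooseStylesheet("Mozilla/5.0 (iPhone; CPU iPhone OS 15_0 like Mac OS X)"));
console.log(chooseStylesheet("Mozilla/5.0 (Windows NT 10.0; Win64; x64)"));
```

A truly "responsive" page keeps this branching inside CSS (`@media (max-width: ...)`) so that one URL serves every device, which is exactly the html5 practice Mozilla was evangelizing.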
Phone</b></span><br /></td></tr></tbody></table><br />The Mozilla team envisioned a phone unbound from the app ecosystem. Mozilla's Chief Technical Officer, Brendan Eich,
and his team of engineers decided they could build a web phone using JavaScript and HTML5, without the crutch of app-wrappers. The team took an atomized view of all the elements of a phone and set out to develop web interfaces (APIs, or application programming interfaces) to let each element of the device speak HTTP web protocols, so that a developer could check battery life, motion status, gesture capture and other signals relevant to the mobile user
that hadn't been utilized in the desktop web environment. And they succeeded! The phones launched in 28 countries around the world. <p></p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjTc4r3xvr_6QtZz5TcLzolZM7Pe_-qjIOPFkgu7-qh-oSUaD7QLcl-Ss3K5CVGBqkLnvh958mpJm5bk85fcQb29MmBYAZlu82ZIFMgCrxEguqxrrvTmTFjwupHJRURkUEp11YZRDBChYg/s857/ChristopherFISL.jpg" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><br /></a></div><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj15B5BV1nDzt0HE7daUPJSFRH3037V8GDVnneVc1mcKUyj1xpirPcU4az7Hcsah5ZsE-oSu_9wVKQqWjQ9KO2gYuHbD_5GLfQV4h2cJBiYjg1cbI5Pgu_aGtw_6LRvtHfYHDg0bP5AKmg/s857/ChristopherFISL.jpg" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="572" data-original-width="857" height="213" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj15B5BV1nDzt0HE7daUPJSFRH3037V8GDVnneVc1mcKUyj1xpirPcU4az7Hcsah5ZsE-oSu_9wVKQqWjQ9KO2gYuHbD_5GLfQV4h2cJBiYjg1cbI5Pgu_aGtw_6LRvtHfYHDg0bP5AKmg/w320-h213/ChristopherFISL.jpg" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><span style="font-size: xx-small;"><b>Christopher presenting at FISL 13</b></span><br /></td></tr></tbody></table>I
worked on the Brazilian market launch of the FirefoxOS phones, where there was dizzying enthusiasm
about the availability of a lower-cost smart phone based on open source
technology. As we prepared for the launch in Brazil, the business team coordinated outreach through
the mobile carriers to announce availability (and
to prepare shelf space in carrier stores) for the new phones. Our events planning team coordinated speaker appearances for us at the developer conferences where we'd speak about html5 for mobile devices. This way we'd have dozens of major Brazilian portals and services optimized for viewing on mobile browsers in time for the upcoming launch.<br /><p></p><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi0JdKmyyB_XUOYdzNKDH7KiXBLcJU_KwyUFqzD8Xt1LZWTTdZBenhOOunAqj9Idq4EHFJyLDVQ4bWxQPlKQKcZszHS4zQs43cmciygjIGqjv_yN00599F167zBhtrARxQIDQDSRvPXnCI/s1962/Fabio.JPG" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="1367" data-original-width="1962" height="223" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi0JdKmyyB_XUOYdzNKDH7KiXBLcJU_KwyUFqzD8Xt1LZWTTdZBenhOOunAqj9Idq4EHFJyLDVQ4bWxQPlKQKcZszHS4zQs43cmciygjIGqjv_yN00599F167zBhtrARxQIDQDSRvPXnCI/w320-h223/Fabio.JPG" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><b><span style="font-size: xx-small;">Fabio Magnoni discussing Firefox OS in Brazil</span></b><br /></td></tr></tbody></table>Mozilla contributors in Brazil reached out to the largest websites in the
country for their consent to list their sites as web-apps on the
new devices. Typically, when you buy a computer, web services and content publishers aren’t <i>on</i> the device; they are simply <i>accessible</i> via the device’s browser. But the iPhone and Android trend of
specific apps for web content was so embedded in people’s thinking that
many site owners thought they needed to do something special to be able
to provide content and services to our phone’s users. Mozilla therefore
borrowed the concept of a “marketplace” which was a web-index of sites
that had posted their site’s availability to FirefoxOS phone users. Users then put bookmarks to the web-apps on their home screens, much as they were used to doing on Apple and Android devices. Later, Apple Safari and Google Chrome also pushed for home-screen bookmarks in lieu of native apps. But the conventional behavior of users constantly installing apps, instead of relying on the same developer's website, continues to this day.<br />
<br />
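Those device APIs the FirefoxOS team pioneered outlived the phone itself. As a rough sketch of the idea, here is how a web page can ask the device about its battery through the Battery Status API that grew out of this era (the <code>formatBattery</code> helper is hypothetical, just for illustration, and the API isn't supported in every browser):

```javascript
// Hypothetical helper: turn a battery status object into display text.
function formatBattery(battery) {
  const pct = Math.round(battery.level * 100); // level is reported as 0.0–1.0
  return `${pct}%${battery.charging ? " (charging)" : ""}`;
}

// navigator.getBattery() is the standardized Battery Status API; guard for
// environments (or browsers) that don't expose it.
if (typeof navigator !== "undefined" && navigator.getBattery) {
  navigator.getBattery().then((battery) => {
    console.log("Battery:", formatBattery(battery));
  });
}
```

The point of the FirefoxOS work was that signals like this, previously reachable only from native app wrappers, became queryable from ordinary web code.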
Mozilla’s engineers argued that there shouldn’t be a “mobile web” distinct from or more limited than the broader web. We should do everything we could to
persuade web developers and content publishers to embrace mobile devices
as “first-class citizens of the web.” So they hearkened back to the concepts in CSS and championed device-aware <a href="https://en.wikipedia.org/wiki/Responsive_web_design">responsive web design</a> under the moniker of “<a href="https://en.wikipedia.org/wiki/Progressive_web_application">Progressive Web Apps</a>.”
The PWA concept is not a new architecture. It’s the idea that a
mobile-enhanced URL should be able to do certain things that a
phone-wielding user expects it to do. Even though Mozilla eventually discontinued the phone project, the PWA work is now heavily championed by Google for the Android device ecosystem.<br />
<br />
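The “listen to who's calling before answering” idea behind responsive design can be sketched in a few lines. This is a simplified illustration (the helper, the token list, and the stylesheet names are hypothetical; real user-agent detection is much hairier, and modern responsive design favors CSS media queries in a single page rather than serving separate variants):

```javascript
// Hypothetical server-side helper: inspect the User-Agent request header
// and answer with a stylesheet suited to the caller's device class.
function pickStylesheet(userAgent) {
  const mobileTokens = ["Mobile", "Android", "iPhone", "iPad"];
  const isMobile = mobileTokens.some((token) => userAgent.includes(token));
  return isMobile ? "site-mobile.css" : "site-desktop.css";
}

// In a Node HTTP handler this might be used as:
//   const css = pickStylesheet(req.headers["user-agent"] || "");
console.log(pickStylesheet("Mozilla/5.0 (iPhone; CPU iPhone OS 16_0 like Mac OS X)"));
// → site-mobile.css
```

Either way, the principle is the same: the page adapts to the device rather than forcing the mobile user through a desktop layout.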
<p></p><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhYDNTdfWPvNngRJWMftWghSHRmFpELiW-LsgSrDdmrt6Tzi1JtDdc-N_hpczXcds0a106Oz74VHr8zMit0hvy-vh0Yw-EKm_dZ09P2_hpPG4HI9vUy-iT5iHb6I00ktgEOzGSaYVWT4Nk/s841/PanasonicTV.png" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="490" data-original-width="841" height="233" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhYDNTdfWPvNngRJWMftWghSHRmFpELiW-LsgSrDdmrt6Tzi1JtDdc-N_hpczXcds0a106Oz74VHr8zMit0hvy-vh0Yw-EKm_dZ09P2_hpPG4HI9vUy-iT5iHb6I00ktgEOzGSaYVWT4Nk/w400-h233/PanasonicTV.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><span style="font-size: x-small;"><span style="font-size: xx-small;"><b>The first Firefox OS powered TV demonstrated in Barcelona</b></span><br /></span></td></tr></tbody></table>After the launch
of the phone, because Mozilla open sources its code, many other
companies picked up and furthered the vision. Panasonic forked the code to power their SmartTVs, which could surf the web without an attached computer. Kickstarter campaigns launched to fork the code into web-enabled smart-watches. New phone manufacturers forked the code to support feature phones, and various versions for RaspberryPi devices allowed the operating system to power web-based picture frames and other devices for "Internet of Things" developer initiatives. <p></p><p>The Mozilla HTML phone launch wasn't intended to overtake the major handset platforms. It was intended to make the point that web browsers were capable of doing the same things an app wrapper could, so long as developers had access to the capabilities of the device over web protocols. This is one of the most admirable things about the philosophy of Mozilla and the open source community that supports it. An idea can
germinate in one mind, be implemented in code, then set free in the
community of open source enthusiasts who iterate and improve on it. There might be no PWA capability on iPhone and Android devices if Mozilla hadn't tried to launch a phone that had <i>only</i> PWAs. While the open sourcing of Netscape wasn’t the start
of the open source movement, it has contributed significantly to the practice. The
people who created the world wide web continue to operate under the
philosophy of extensibility. The founders of Google’s <a href="https://en.wikipedia.org/wiki/Chromium_(web_browser)">Chromium</a>
project were also keen Mozilla supporters, even though launching a separate open source browser made them competitors. (Remember the Netscape open sourcing had specifically been for the purpose of spurring competitors to action.) The fact that a different company,
with a different code base, created a similarly thriving open source
ecosystem of developers aiming to serve the same user needs as Mozilla’s
is precisely what Mozilla’s founders set out to promote in the spin-off from Netscape. And it echoes the same sentiments I’d heard expressed in Washington, DC in the late 1990s.<br />
<br />
One of the things I studied a great deal was the US patent system. Back in the early days of the
US government, Secretary of State Jefferson created the concept of a
legal monopoly. It was established by law for the government first, then
expanded to the broader right of all citizens, and later all people
globally via the <a href="https://www.uspto.gov/">US Patent and Trademark Office</a>. When I was in college I had an invention that I wished to patent and produce for the
commercial market. My physics professor suggested that I not wait until I
finish my degree to pursue the project. He introduced me to another
famous inventor from my college and suggested I meet with him. Armed
with great advice I went to the USPTO to research prior art that might
relate to my invention. Through that research, I learned that anyone in the world can pursue a patent and be granted a 17-year monopoly to protect the invention while the merits of its product-market fit are tested. Thereafter, the granted patent would pass, royalty-free, to the global community. “What!?” I thought. “I am to declare the goods to the USPTO so they can give them away to all humanity shortly after I've done all the work to bring them to market?” This certainly didn’t seem like a very good deal for inventors.
But it also echoed what I had learned about why the government lets certain inefficiencies propagate for the greater good of society. It may be that Whirlpool invented a great washing machine. But Whirlpool can monopolize that invention for only 17 years before the world at large gets to reap the benefits of the innovation without royalties due to the inventor.<br />
<br />
My
experiences with patents at Yahoo were also very informative. Yahoo
had regularly pursued patents, including for one of the projects I'd
launched in Japan. But their defense of patents had been largely in the
vein of the “right to operate” concept in a space where their products
were similar to those of other companies. I believed that Yahoo, AOL and Google were particularly generous and lenient in enforcing their patents. As an inventor myself, I was impressed that the innovators of Silicon Valley, for the most part, did not pursue legal action against each other. They seemed to actually promote iteration upon their past patents. I took away from this that Silicon
Valley is more innovation focused than business focused. When I launched
my own company, I asked a local venture capitalist whether I should
pursue patents for a couple of the products I was working on. The
gentleman, a partner at the firm, said, “I prefer
action over patents. Execute your business vision and prove the market
value. Execution is more valuable than ideas. I’d rather invest in someone who can execute rather than someone who can just invent.” And from the 20 years I’ve seen here,
it always seems to be the fast follower rather than the inventor who
gets ahead, probably precisely because they focus on jumping directly to
execution rather than spending time drafting claims and illustrations with lawyers.<br />
<br />
Mozilla has, in the time I’ve worked
with them, focused on implementing first in the open, without thinking
that an idea needed to be protected separately. Open source code exists
to be replicated, shared and improved. When AOL and the Mozilla team open sourced the code for Netscape, it was essentially opening the
patent chest of Netscape intellectual property for the
benefit of all small developers who might wish to launch a browser
without the cumbersome process of vetting licenses in the code base. Bogging down developers with patent-encumbered code
would slow those developers from seeking to introduce their own unique
innovations. Watching a global market launch of a new mobile phone based
on entirely open source code was a phenomenal thing to witness. And it
showed me that the benevolent community of Silicon Valley’s innovators
had a vision much akin to that of the people I’d witnessed in Washington,
DC. But this time I’d seen it architected by the intentional acts of
thousands of generous forward-thinking innovators rather than
through the act of legislation or legal prompting of politicians.<br />
<br />
<b>The Web Disappears Behind Social Silos</b><br />
<br />
The
web 2.0 era, with its dynamically assembled web pages, was a tremendous
advance for the ability of web developers to streamline user
experiences. A page mashed-up of many different sources could enhance
the user’s ability to navigate across troves of information that would
take a considerable amount of time to click and scroll through. But
something is often lost when something else is gained. One of the boons of the web 1.0 era was the idea that a website was
a relatively static set of components that were hosted at a URL on a given
day. So my search engine company Looksmart could have a fairly
authoritative index of the entire internet as it would look to any user
across the world. One fascinating project that continues to this day is a
search engine that retains a static view of what every major URL looked
like on the day it was indexed. So today you can see what Mozilla's
webpage <a href="http://web.archive.org/web/19990116225540/http://mozilla.org/">looked like in 1999</a>.
I often wondered what the advent of web 2.0 would do to the Internet
Archive. In 100 years, would it seem that the history of web archiving stopped abruptly when users went from static HTML to dynamic, profile-based page loads? One thing was certain: there would no longer be a consistent view of the web across the world.
Users in certain countries are now excluded from content that the same user would be able to see if their IP address were determined to be local to the site host. In many cases this has nothing to do with the opinion of the local government. Rather, it has to do with whether the site host believes it is important to make content available to those outside the country, or reflects a deliberate decision to exclude international users because the hosting costs of supporting them are too great, or because of licensing restrictions on content featured on the domain. This is a sad blow to
the freedom of information access globally, especially to people in
countries where it is hard to get access to content locally. But it is
also a kind of degradation of what the web's pioneers had intended.
In retrospect, history will most likely show a narrowed web in the archive, as the dynamic pages will not be saved. What happened there
is robustly logged and tracked in the moment. But it won't be
reassembled for viewing in the future. It is now a much more ephemeral
art.<br /><br />When Twitter
introduced its micro-blog platform, end users of the web were able to
publicize content they curated from across the web much faster than
having to author full blog posts and web pages about content they sought
to collate and share. Initially, the Twitter founders maintained an
open platform where content could be mashed-up and integrated into other
web pages and applications. Thousands of great new visions and
utilities were built upon the idea of the open publishing backbone it
enabled. My own company and several of my employers also built tools
leveraging this open architecture before the infamous shuttering of what
the industry called “<a href="https://www.forbes.com/sites/benkepes/2015/04/11/how-to-kill-your-ecosystem-twitter-pulls-an-evil-move-with-its-firehose">The Twitter Firehose</a>”.
But it portended yet another phase shift of the early social web. The Twitter we knew became a diaspora of
sorts as access to the firehose feed was locked down under identity
protecting logins. This may be a great boon to those seeking anonymity
and small “walled garden” circles of friends. But it was not
particularly good for what many of the innovators of the web 2.0 era had hoped for: the greater enfranchisement of web citizenry.<br /><br />The
early pioneers of the web wanted to foster an ecosystem where all
content linked on the web could be accessible to all, without hurdles on
the path that delayed users or obscured content from being sharable.
Just as the app-ified web of the smartphone era cordoned off chunks of
web content that could be gated by a paywall, the social web went into a
further splitting of factions as the login walls descended around
environments that users had previously been able to easily post, view and share.<br /><br />The parts of the developer industry that
weren’t mourning the loss of the open-web components of this great
social fragmentation were complaining about psychological problems that
emerged one step removed from the underlying cause. Fears of censorship and
filter-bubbles spread through the news. The idea that web citizens now
had to go out and carefully curate their friends and followers led to
psychologists criticizing the effect of social isolation on one side and
the risks of altering the way we create genuine off-line friendships on
the other.<br /><br /></p><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhZSZ8Ml_w4gYO9oHXvfXq1UuNN2wEQiMVLTqqLttVSzLl8pqVJRK7Y28eJ4v6DqYQ5N1naPl_E6mx9nPkrgjSEzb_WEsuK1EAoygdp7QNpFmOb4Zcdg9YjO1aKARdhxhgWV2EiVpGlMHE/s1920/WebRTC+Chat.png" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="1114" data-original-width="1920" height="186" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhZSZ8Ml_w4gYO9oHXvfXq1UuNN2wEQiMVLTqqLttVSzLl8pqVJRK7Y28eJ4v6DqYQ5N1naPl_E6mx9nPkrgjSEzb_WEsuK1EAoygdp7QNpFmOb4Zcdg9YjO1aKARdhxhgWV2EiVpGlMHE/w320-h186/WebRTC+Chat.png" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><span style="font-size: xx-small;"><b>Hugh Finnan and Todd Simpson demo browser WebRTC</b></span><br /></td></tr></tbody></table>Mozilla had a couple of its own forays into social tools as well. The Firefox team integrated a utility that allowed in-browser chat sessions over a new privacy-preserving protocol called WebRTC (short for Web Real-Time Communication) and collaborated with the Chrome developer team so that WebRTC sessions between Firefox and Chrome let any user connect across the two browsers without having to download a chat app. This concept worked so well that the Firefox team decided to deploy a built-in communications utility for quick and convenient WebRTC calls from Firefox to any other modern web browser. 
Branded as “Firefox Hello,” the utility enabled free web streaming conversations like those offered by the Skype application.<p></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right; margin-left: 1em; text-align: right;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgYLy_u2cds_bN29mjI6yvvkwJAUAMCp8EjliAnS0JQfft_2M5DpzPkhyphenhyphenoLU_b4fujiCWseQTSTTEt_HlgluKDcHjKD1FcPuEG-A1DdPTHHng5DWN4P08b_0y0OoZfbp-88soSmMiqNGH8/s1728/FirefoxHello.png" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="1210" data-original-width="1728" height="224" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgYLy_u2cds_bN29mjI6yvvkwJAUAMCp8EjliAnS0JQfft_2M5DpzPkhyphenhyphenoLU_b4fujiCWseQTSTTEt_HlgluKDcHjKD1FcPuEG-A1DdPTHHng5DWN4P08b_0y0OoZfbp-88soSmMiqNGH8/w320-h224/FirefoxHello.png" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><span style="font-size: x-small;">Shared web session in Firefox Hello</span><br /></td></tr></tbody></table>Firefox Hello didn’t require that your contacts use the same software as you, so long as their browser was current and supported WebRTC. Because it was ubiquitously available, we didn’t need a built-in address book. Typically, native address books in applications exist because only people with the same software as you are compatible as contacts. But if absolutely everyone can use the general internet, there is no need for narrowly defined contact lists. Exclusive networks are advantageous for marketing purposes. For instance, social networks like to solicit users to join certain “clubs” purely because others of their friends are doing so. But this often comes with no specific benefit to the new user. 
Those might be seen as “lock-in” vectors to grow audiences for a product, a strategy rife in app development these days. But Mozilla is more enthusiastic to see software utilities grow by virtue of their benefits, not their limitations. Mozilla aspires to show the way past the crutches of app development, such as the profligate copying of user data between multiple locations, which is cumbersome, potentially risky and hard for most users to understand.<p></p><p>Part of the benefit of WebRTC was that the connection over internet protocol enabled use cases beyond just staring at web camera images. Our users could share content in real time, such as webpages they were viewing, pictures and potentially web streaming. But we didn’t have a concept of file storage or transfer. We knew this was a particular pain point for our users. For instance, if they wanted to share pictures but couldn’t do it over a web viewing session in Firefox Hello, the pictures typically had to be sent via an email host. Most email hosting services only permitted sharing a few pictures, because of message file limits. 
Even in Thunderbird, users could only share as many megabytes in a message as their email host permitted.</p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgWCwxztjNbLUK0dxj7XPbIUK_y_vdkreZjvmxJPKIXW8vZFuIpkemaAiL7l-70_9Okp1UN9IBVR1OUeeWCuO6kiTQCet6iyoE5EfVcNm5ogFM_IxdrydCII2BXkbXFlAKgjaLIRZxqtfk/s1100/Firefox-Send-in-Desktop-browser-and-Mobile-Web-browser-app-Digicular.png" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="797" data-original-width="1100" height="232" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgWCwxztjNbLUK0dxj7XPbIUK_y_vdkreZjvmxJPKIXW8vZFuIpkemaAiL7l-70_9Okp1UN9IBVR1OUeeWCuO6kiTQCet6iyoE5EfVcNm5ogFM_IxdrydCII2BXkbXFlAKgjaLIRZxqtfk/w320-h232/Firefox-Send-in-Desktop-browser-and-Mobile-Web-browser-app-Digicular.png" width="320" /></a></div>To address this, Mozilla launched Firefox Send, which solved the underlying storage limitation that hindered traditional webmail services and spared users the cost of the cloud hosting work-arounds they would otherwise need when Thunderbird or Outlook couldn’t assure the file transfer. <br /><p></p><p>The past decade has drastically changed the general concept of what a social network is when it comes to the web. Many people don't think of Mozilla specifically as a social network. But when the AOL team created the Bugzilla platform for open sourcing the code tree of Netscape, they had essentially created a highly efficient, purpose-built social network for collaboration in software. In our community and company we had discussions around codes of conduct to foster and enforce the well-being of our community via our social platforms. 
We had suffered incidents of trolling, inappropriate political commentary, biased language and all the general problems that later arose in other web social platforms. But we dealt with them by engaging in dialogue with the community participants. We didn’t try to algorithmically solve something that was a fundamental human decency issue. We faced it directly with discussions about decency and agreements about our shared purpose. We viewed ourselves as custodians of a collaborative environment where people from all walks of life and all countries could come together and participate without barriers or behaviors that would discourage or intimidate open participation. Some people in the community opined that if Mozilla had continued expanding its social networking initiatives around community tools, communication and services, it might have had some good demonstrable examples of how to address the ills that other public social networks face today.<br /></p><div class="separator" style="clear: both; text-align: center;"><a href="https://helplogger.blogspot.com/2014/02/share-blogger-posts-using-sharethis-buttons.html" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;" target="_blank"><img border="0" data-original-height="1136" data-original-width="738" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiPiFy51yRcGn5aCosOMdMQUtjx4EPj3J97mY7o_8XGgkDpGjRLjuqHgqsUgT76I_ztM0NV-huh9XElpXrr46jOvpRhDv3GMrvlpaCkpQBAJFbh_TP83vKWpKx9ChekvU8a4R7dNEK2IVw/s320/addthis.png" /></a></div><p>The movement of the web-based social network resulted in a new concept of content indexing and display in an atomized way. A personalized webpage looks radically different to every individual viewing it. Some of these shifts in content discovery were beneficial. But some of the personal data that powered these webpages led to data breaches and loss of trust in the internet community. 
Mozilla’s engineers thought they could do a great deal to improve this. One of the things they believed they could address was the silo and lock-in problem of web authentication.</p><p>The social web services introduced a Babel-like plethora of social logins that may have been a boon for emerging startups wanting quick login options. But it came with a mix of problems for the web at the same time. Website authors had to go through particular effort to integrate multiple 3rd-party code tools for these social logins to be used. So Mozilla focused on the developer-side problem. As the social web proliferated, webmasters were putting many bright icons on their websites prompting visitors to sign in with five or more different social utilities. This could lead to confusion, because the site host never knew which social site a user had registered an identity through. So if a user visited once and logged in with Twitter, and another time accidentally logged in with Facebook, their personalized account content might be lost and irretrievable. <br /></p><p></p><p></p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjVB6MucgPCkMQ_BkeRM0_72WclPJWjpP3G7HVNtKmdSkJ61SgeMSnY6QtrM66xH0-QBwBjUXVVrDzJafNJuKg46P5gUq70rS0vkVg7LQ2T_-XWeNiGg74N7dWTo5b-sQtq9E3bLn4XUvU/s848/BrowserID.png" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="650" data-original-width="848" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjVB6MucgPCkMQ_BkeRM0_72WclPJWjpP3G7HVNtKmdSkJ61SgeMSnY6QtrM66xH0-QBwBjUXVVrDzJafNJuKg46P5gUq70rS0vkVg7LQ2T_-XWeNiGg74N7dWTo5b-sQtq9E3bLn4XUvU/s320/BrowserID.png" width="320" /></a></div>Mozilla's engineers thought this was
something a browser could help with. If a user stored a credential at the browser level, the user wouldn’t need
to be constantly peppered with options to authenticate with 10 different
social identities. By initial design, “<a href="http://lloyd.io/how-browserid-works" target="_blank">BrowserID</a>” allowed a user to store a login attribute from any user-nominated email host, then assert that identity to a website that supported the OpenID architecture, which was the underlying mechanism of all those social logins. Firefox, or any derivative fork of its code, didn't transmit the identity the user chose to any central repository. This was part of Mozilla's privacy-by-design principle for product development. Our rule was not to transport user data anyplace it didn't need to be. A key benefit of BrowserID was that it operated as a client-side tool, keeping one's private data in the control of the user and eliminating the excessive use of passwords that were becoming a point of privacy vulnerability on the web.<br /><p></p><p>This movement was well received initially, but many supporters
called for a more prominent architecture to show the user where their logged
identities were being stored. So Mozilla morphed the concept from
a host-agnostic tool to the idea of a client-agnostic "<a href="https://accounts.firefox.com" target="_blank">Firefox Account</a>" tool that could be accessed
across all the different copies of Firefox a user had (on PCs, Phones,
TVs or wherever they browsed the web) and even be used in other browsers or apps outside of Mozilla's direct control. With this cloud service infrastructure, users could synchronize their application preferences across all their devices, with strong cryptography ensuring the data could not be read in transit between any two paired devices. <br />
<br />The other great inconvenience of the social web was the number of steps necessary for users to share conventional web content on it. Users had to copy and paste URLs between browser windows or tabs if they wished to comment on or share web content. There was of course the work-around of embedding social hooks into every webpage in the world one might want to share. But that would require the web developers of every site to put in a piece of JavaScript giving users a button to upvote or forward content. If every developer in fact did that, it would bog down page loads across the entire internet with extraneous code loaded from public web servers. Hugely inefficient, as well as a potential privacy risk. Turning every webpage into a hodge-podge of promotions for the then-popular social networks didn't seem like an elegant solution to Mozilla's engineers. It seemed to be more of a web plumbing issue that could be abstracted to the browser level.<br />
<br />
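That browser-level abstraction is, in fact, where the industry eventually landed: modern browsers expose sharing as a built-in capability through the Web Share API. A hedged sketch of the idea (the <code>buildSharePayload</code> helper is hypothetical; <code>navigator.share()</code> is the real, later-standardized API, available in secure contexts on supporting browsers, and distinct from Mozilla's earlier Social API work described here):

```javascript
// Hypothetical helper: validate and package what the user wants to share.
function buildSharePayload(title, url) {
  if (!/^https?:\/\//.test(url)) throw new Error("expected an absolute URL");
  return { title, url };
}

// Sharing as a browser-level "intent": no per-site buttons, no third-party
// scripts embedded in the page. The browser presents its own share UI.
async function shareCurrentPage() {
  const payload = buildSharePayload(document.title, location.href);
  if (navigator.share) {
    await navigator.share(payload);
  } else {
    console.log("Web Share unsupported; fall back to copying:", payload.url);
  }
}
```

The page itself carries no social network code at all; the user's intent is expressed once, at the browser chrome, and routed to whatever service the user prefers.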
Fortunately, this was obvious to much of the web development community as well. So the Mozilla team, along with Google's Chrome team, put their heads together on an approach that could be standardized across web browsers, so that every website in the world didn't have to code in a solution unique to each social service provider. The importance of this was accentuated when the website of the United States government integrated a social engagement platform called AddThis, which was found to be tracking visitors to the page with tiny code snippets that the government's web developers hadn't engineered. People generally expect privacy when visiting web pages. The idea that reading what the president had to say came with a tradeoff of being tracked off of that site later appeared particularly risky.<br />
<br />To enable a more privacy-protecting web while still giving users the convenience of engaging with social utilities, Mozilla Labs borrowed a concept from the Progressive Web-App initiative coming out of the Firefox phone work. PWAs utilized the concept of a user “intent.” Just as a thermostat in a house expresses its setting as a “call for heat” from the house’s furnace, there needed to be a means for a user to express an “intent to share” at the interface level of the browser, avoiding the need to alter anything in the page. Phone manufacturers had enabled a similar concept of sharing at the operating system level for the applications that users downloaded, each with its own embedded sharing API; a browser needed to have the same capability. <br /></p><p></p><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi0sRI8SYIxTTLE97b2yyvtZGhVKhcIpp2jy3NDId4g1q4nnrMtxblHMvLz37EfU8eWkS694DR7Wm7ctl0Ot-n7yIapetqEle0fRa5SKCoJ9N44F6ZgNS8lCVXP5DwnnYnzPgz-VpTqITg/s2232/Services+for+Firefox.png" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="2232" data-original-width="1041" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi0sRI8SYIxTTLE97b2yyvtZGhVKhcIpp2jy3NDId4g1q4nnrMtxblHMvLz37EfU8eWkS694DR7Wm7ctl0Ot-n7yIapetqEle0fRa5SKCoJ9N44F6ZgNS8lCVXP5DwnnYnzPgz-VpTqITg/w298-h640/Services+for+Firefox.png" width="298" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><b><span style="font-size: xx-small;">Social API implementation demonstration page<br /></span></b></td></tr></tbody></table><p>To achieve this in the browser, we used the concept of a “Social API.” An API (application program interface) is a kind of digital socket or “hand-shake” protocol that can mediate between
programs when the user initiates a request to push or pull data. As an example, Thunderbird email software can interface effortlessly with an email host account (such as Gmail or Hotmail) using an API, provided the user authenticates the program to make such requests on their behalf. It is similar to a browser extension, but can be achieved without pushing software to the user.<br /></p><p>Just as Firefox Accounts could sync services on behalf of a user without knowing any details of the accounts the user asked to sync, so too should the browser be able to recognize when a user wants to share something, without making the user dance between browser windows with copy and paste. </p><p>So Mozilla explored the concept of browsers supporting “share intents,” just as the vendors of phones and PCs did at the time to support the convenience of social utilities. Because many of these services had the concept of notifications, we explored “publisher side” intents as well. The convenience this introduced meant users didn’t have to stay logged into their social media accounts in order to be notified when something required their attention or when an inbound message arrived. </p><p>At the time, there was a highly-marketed trend in Silicon Valley around “gamification” in mobile apps: the idea that web developers could award points and rewards to drive loyalty and return visits among web users. Notifications were heralded by some as a great way to delight visitors of your website and lure them back for more of whatever you offered. We wondered whether developers would over-notify to drive traffic to their sites, saturating and distracting users at a cost to their attention and time that outweighed any benefit to them. <br /><br />
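The publisher-side notification intent is recognizable in what later became the standardized Notifications API in browsers. Below is a hedged sketch of the permission flow a well-behaved site follows before notifying; the `win` parameter is an assumption of this sketch, standing in for the browser's `window` object so the flow can be shown (and exercised) outside a browser:

```javascript
// Sketch: notify the user only after asking permission once.
// `win` stands in for the browser `window`; in a real page you would
// pass `window` itself. Not Mozilla's original Social API code.
async function notifyUser(win, title, body) {
  const N = win && win.Notification;
  if (!N) return null;                       // feature-detect: no support, do nothing
  let permission = N.permission;
  if (permission === "default") {
    // Ask exactly once; the browser renders its own permission prompt.
    permission = await N.requestPermission();
  }
  if (permission !== "granted") return null; // respect the user's refusal
  return new N(title, { body });             // display the notification
}
```

The key design point, then as now, is that the user agent mediates: the site expresses an intent to notify, and the browser decides whether the user ever sees it.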
Fortunately, we did not see major notification abuse from the sites that supported Social API. We did receive widespread interest from the likes of Facebook, Twitter, Yahoo and Google, the major messaging service providers of the day. And so we jointly worked to up-level this to the web standards body called the World Wide Web Consortium (abbreviated as the <a href="https://www.w3.org/">W3C</a>) for promotion outside the Firefox ecosystem, so that it could be used across all web browsers that supported W3C standards.<br />
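The share-intent side of this work is recognizable in what eventually reached browsers as the W3C Web Share API. A minimal sketch of a share handler in that style follows; the navigator-style argument and the mailto fallback are illustrative assumptions of this sketch, not Mozilla's original implementation:

```javascript
// Sketch: use the user-agent's native share intent when available,
// otherwise hand back a plain mailto: link as a fallback.
// `nav` stands in for the browser's `navigator` object.
async function shareContent(nav, { title, url }) {
  if (nav && typeof nav.share === "function") {
    await nav.share({ title, url });  // the browser shows its own share sheet
    return "shared";
  }
  // Fallback for user agents without a share intent (illustrative choice).
  return "mailto:?subject=" + encodeURIComponent(title) +
         "&body=" + encodeURIComponent(url);
}
```

Note how the page never learns which service the user shared to; the "intent" is resolved entirely by the browser, which was the privacy point of abstracting sharing to the browser level.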
<br />Working with this team I learned a great deal from my peers in the engineering organization. At first I wondered: if this is such a great idea, why doesn’t Firefox make it a unique selling point of our software? What’s the rush to standardize it? Jiminy Cricket voices across the organization pointed out that the goal of shipping open source code in the browser is precisely to have others adopt the best ideas and innovate upon them, rather than to hold them dear. The purpose of the standards organizations we worked with was to pass on those innovations so that everyone could use them without adopting Firefox-specific code. Good ideas, like the USPTO’s concept of eventual dissemination to the broader global community, are meant to spread to the entire ecosystem, so that webmasters avoid the pitfall of coding their websites to the functionality of a single piece of software or web browser. Mozilla engineers saw part of their mission as championing web-compatibility (often shortened to webcompat in our discussions). Firefox has a massive population of addressable users. But we want web developers to code so that all users have a consistently great experience of the web, not just our own audience. There is a broad group of engineers across Microsoft, Google, Apple, Samsung, Mozilla and many smaller software companies who lay down their corporate flags and band together in the standards bodies to dream of a future internet beyond the capability of the software and web we have today. They do this out of commitment to the next generation of internet, software and hardware developers who will follow in our footsteps. Just as we inherited code, process and standards from our forebears, it is our responsibility to pass on the baton without being hampered by the partisanship of our competing companies. 
We have to build the web today so that future generations are set up for success in facing the new demands of the technology environment we create for tomorrow. </p><p>Over the course of the Social API project, Firefox, Chrome and Microsoft's new Edge browser implemented different models for addressing the use case we were solving for. Over time, we agreed on ways the tools could be standardized, and we then upgraded our software to perform consistently across our products on different operating systems.</p><p>During the early craze around social media, there was much criticism of how easily users unfamiliar with the risks of privacy exposure could share potentially private information on such platforms. Facebook had been commissioning studies on the psychological benefits and pitfalls of social media use. During one of our company all-hands I posed this issue to one of my mentors, a Mozillian engineer named Shane Caraveo. “Shouldn’t we be championing people to go off these social platforms and build their own web pages, instead of facilitating greater usage of the conveniences of social tools?” Shane pointed out that Mozilla does do that, through the educational tools on the Mozilla Developer Network, which demonstrates with code examples exactly how to build your own website. Then Shane made a comment that has stayed with me for years: “I don’t care how people create content and share it on the internet. I care that they do.”<br />
<br />
<b>The First Acquisition</b><br />
<br />The standardization of web notifications across browsers was one of the big wins of our project. The other, for Mozilla specifically, was the acquisition of the Pocket content aggregation and recommendation engine. When I worked at Yahoo, one of the first acquisitions they had made was the bookmark backup and sharing service del.icio.us. Our Yahoo team had seen the spread of the social web-sharing trend as one of the greatest new potentials for web publishers to disseminate their creations and content. They had built their own social networks, Yahoo360 and Yahoo Answers, and had acquired other social sharing platforms, including upcoming.org and flickr.com, during the same period. These social sharing vectors bolstered the visibility of web content as friends praised and “re-posted” it among their circles. Many years later, Yahoo sold the cloud bookmarking business to the founder of YouTube, who sought to rekindle the idea and created a SocialAPI socket for Firefox. </p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEita-8UbaQ-uYsAdr8W3f_jt79VWWd2kPHIFDQWJyzmned_LuhqRo9FbRp40B0e5mGvIaiL_pMhDqnToAWcudKF-0pNyZJBn8hyphenhyphenxz-nWHZOGto8upVArCkf4r_iFKm7RtsdEejzxD8Z80U/s500/NateWeiner.jpg" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="500" data-original-width="500" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEita-8UbaQ-uYsAdr8W3f_jt79VWWd2kPHIFDQWJyzmned_LuhqRo9FbRp40B0e5mGvIaiL_pMhDqnToAWcudKF-0pNyZJBn8hyphenhyphenxz-nWHZOGto8upVArCkf4r_iFKm7RtsdEejzxD8Z80U/w200-h200/NateWeiner.jpg" width="200" /></a></div>Another entrepreneur, Nate Weiner, had taken a different approach to addressing the web-archiving need of his product's users.
His service, called Pocket, just allowed the quick-archiving and labeling of web content in a user's personal account. Pocket users would have their own collection of web content with them for the future. Nate built browser extensions just like del.icio.us and also implemented SocialAPI. But its popularity seemed to hinge on its mobile and tablet apps. <br /><p></p><p></p><p>Saving web content may seem like a particularly fringe use case for only the most avid web users. But the Pocket service received considerable demand. With funding from Google’s venture investing arm among others, Nate was able to grow Pocket considerably, and even expand into a destination website where users could browse the recommendations of saved content from other users in a small tight-knit group of avid curator/users. It was perhaps the decentralization of Pocket’s approach that made it work so well. The article saving behaviors of millions of Pocket users produced a list of high quality content that was in no way influenced by the marketing strategies of the news publishing industry. It served as a kind of barometer of popular opinion about what was trending across the web. 
But it was not just a record of what was popular; it was a record of what people wanted to save for their own reference later.<br /><br /></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhqSGbL2GqUVHokBmlZw7xQVkTHPkK7U9DHPFMGwez0HvxHuWC9VQitflr7YqTC1WgCSc1ZoWIbiTL-llH9yq-aBMc6Ldk9zo4J_KmqHJOJ5-PlUMKp_TFoGQEEyM5oT2iGbomJy6HQYvA/s2048/PocketArticleView.png" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="2015" data-original-width="2048" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhqSGbL2GqUVHokBmlZw7xQVkTHPkK7U9DHPFMGwez0HvxHuWC9VQitflr7YqTC1WgCSc1ZoWIbiTL-llH9yq-aBMc6Ldk9zo4J_KmqHJOJ5-PlUMKp_TFoGQEEyM5oT2iGbomJy6HQYvA/s320/PocketArticleView.png" width="320" /></a></div>When I first met the Pocket team, they commented that their platform was not inherently social, so the constraints of the Social API architecture didn’t fit the needs of their users. They suggested that we create a separate concept around “save intents” that did not fit within the constraints of the social media intents the phones and services were pursuing at the time. When Firefox introduced the “Save to Pocket” function in our own browser chrome, it seemed to be a combination of “Save to Bookmarks” (browser client-side storage) and the Firefox Accounts “Sync” feature. But we found that a tremendous number of users were keen on the Pocket save function, even more than on the sync-bookmarks architecture we already had in the browser. <br /><p></p><p>Because Google had already invested in Pocket, I had thought it more likely that they would eventually join the Chrome team. But by a stroke of good fortune, the Pocket team had a very good experience working alongside the Mozilla team and decided that they preferred to join Mozilla to pursue the growth of their web services. This was the first acquisition Mozilla had executed.
Because I had seen how acquisition integrations sometimes fared in Silicon Valley, I was fascinated to see how Mozilla would operate another company with its own unique culture. </p><p>Post-acquisition, Mozilla used the Pocket platform as a content sharing and discovery tool within the new-tab portion of the browser, so that when Firefox users spawned a new page they could see recommendations from the peer community of Firefox users, much as the original Netscape browser had featured recommended websites in its Open Directory, curated by Netscape users 20 years prior. Mozilla didn’t discontinue Pocket initiatives outside of Firefox after the acquisition and integration. Pocket continues to support all browsers that compete with Firefox to this day. The community of Pocket users and contributors remains robust and active well beyond the existing Firefox user base.<br />
<br />
<b>Emerging Technologies</b>
<br /></p><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhPdwqYCRJTjgTdSzqvd2Jwdv7lpAkYsseVIbc43CjihAvsl6IAhyx1ALKcr3YLBhM0W7G9EszVfw8aME6EezFT58Vs31w_4bR66-AyWFs3WDU7Lzz9T4S2GtADIqL1L8fQeA4pmGJtWkU/s2048/Hubs.png" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="1134" data-original-width="2048" height="221" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhPdwqYCRJTjgTdSzqvd2Jwdv7lpAkYsseVIbc43CjihAvsl6IAhyx1ALKcr3YLBhM0W7G9EszVfw8aME6EezFT58Vs31w_4bR66-AyWFs3WDU7Lzz9T4S2GtADIqL1L8fQeA4pmGJtWkU/w400-h221/Hubs.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><span style="font-size: xx-small;"><b>Mozilla Hubs VR development platform<br /></b></span></td></tr></tbody></table><p>Mozilla's Emerging Technology team evaluates new opportunities that web technology can be applied to next-generation challenges. They do this so that the web remains a preferred option for developing future tools that might otherwise be app based. As an example, as developers begin to build 3D Virtual Reality content experiences, Mozilla’s engineers realized that there needed to be an expanded means for 3D content to be served on the web without needing complex software downloads. So they created a means of using JavaScript to embed 3D graphics inside of web pages or serve 3D content rendered inside the browser itself, through an initiative called aframe.io. They also launched a web development utility called Hubs for first-time developers to become familiar with building 3D spatial and audio landscapes inside a simple drag-and-drop dashboard tool, complete with free hosting for developers who wished to host small events online in the simulated 3D chat rooms. 
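Part of A-Frame's appeal was that a 3D scene could be declared directly in page markup, with no complex software downloads. A minimal sketch of such a page follows; the release version in the script URL and the scene contents are illustrative, not taken from a Mozilla example:

```html
<!-- Minimal A-Frame scene: a colored box under a plain sky, rendered
     in the browser and viewable with a mouse or a VR headset.
     The script version number here is illustrative. -->
<html>
  <head>
    <script src="https://aframe.io/releases/1.4.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      <a-box position="-1 0.5 -3" rotation="0 45 0" color="#4CC3D9"></a-box>
      <a-sky color="#ECECEC"></a-sky>
    </a-scene>
  </body>
</html>
```

The design choice mirrors HTML itself: 3D entities are ordinary custom elements, so web developers can reuse the DOM, CSS-like attributes, and JavaScript skills they already have.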
For larger events, such as the Augmented World Expo and the Mozilla all-hands meetings, they built out conference-sized deployment capability for the Hubs platform and hosted their own developer events in the virtual space as well.</p><p>Aside from the visual aspects of the expanding internet, the trend of voice-based assistants gained momentum as Apple, Google and Amazon each launched their own audio concierge tools (Siri, Google Assistant and Alexa, respectively). Mozilla has a large group of engineers who work specifically on content-accessibility tools for people with sight or hearing impairments. So expanding the ability of web services to read content to users without a screen, or to listen to user prompts, became an area of central focus for the Mozilla Emerging Technology team.</p><p></p><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjMY3pc99fCwOHfbQZbQ5DtKD1XI33CWMNq85P9RF6VuEZqY1r2Mhlkyx-8T-oZN4FgfHX3S2nAKEB-eK_ywNeToIUrt1AlQFBTJMILnnE8LbW3KPIYnDaq9ATWfU0fMhreIFCkr5xFX1k/s436/Andre.jpeg" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="325" data-original-width="436" height="228" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjMY3pc99fCwOHfbQZbQ5DtKD1XI33CWMNq85P9RF6VuEZqY1r2Mhlkyx-8T-oZN4FgfHX3S2nAKEB-eK_ywNeToIUrt1AlQFBTJMILnnE8LbW3KPIYnDaq9ATWfU0fMhreIFCkr5xFX1k/w305-h228/Andre.jpeg" title="Andre Natal presents at Mozilla" width="305" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><b><span style="font-size: x-small;">Andre Natal presents on Voice APIs at Mozilla</span></b><br /></td></tr></tbody></table>During my time in Brazil launching the FirefoxOS phone, I attended a conference called FISL, which brought together hundreds of developers working in open source
projects. One of the engineers there, Andre Natal, presented an app he'd built that could give spoken directions to anyone in response to spoken questions. I asked him to consider adapting his web service for navigation on FirefoxOS phones. Going beyond that, Andre decided to join our Emerging Technologies team, where he developed the first "Voice Search" extension for Firefox desktop users.<p></p><p></p><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiETSPZW7YcXBPiqQapzX_ts0egTD6WlzZQWK8PR0jiluhyVQb9zr7kXW7UWAHvUyadYAKlcl5p7H4gI82YbpkClEkRLkL5b0Qyu5UjZsUuQPL4O-wWoyuoNqxGi5nrAZlkNg21GhzJAb4/s591/ReadToMe.png" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="335" data-original-width="591" height="227" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiETSPZW7YcXBPiqQapzX_ts0egTD6WlzZQWK8PR0jiluhyVQb9zr7kXW7UWAHvUyadYAKlcl5p7H4gI82YbpkClEkRLkL5b0Qyu5UjZsUuQPL4O-wWoyuoNqxGi5nrAZlkNg21GhzJAb4/w400-h227/ReadToMe.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><span style="font-size: xx-small;"><b>"Read Page" built into Firefox Reader Mode<br /></b></span></td></tr></tbody></table><p> </p><p>Parallel to listening for voice commands, Mozilla was actively working on letting the Firefox browser and Pocket apps read narrative text aloud to the user. In Firefox, all the user has to do is click into "Reader Mode" in the toolbar to see the speech prompt at the left of the browser frame.
There the user can select from various spoken accents for Firefox to use in the narration.</p><p><br /></p><p></p><p></p><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhYztnKmrByEwQQUDtq4hLkvXp1nI0UdvGNWLYXVS4uQ1VFh5ZGWcerMVm9t1CKLG1eR5ms_q1ilnm7sHCaq50vFruSjOJrVrBVTXHCcAp5FfRzcgvnwTkniLTvxJnzDnaXsuXPr0KpoJM/s767/CommonVoice.png" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="562" data-original-width="767" height="235" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhYztnKmrByEwQQUDtq4hLkvXp1nI0UdvGNWLYXVS4uQ1VFh5ZGWcerMVm9t1CKLG1eR5ms_q1ilnm7sHCaq50vFruSjOJrVrBVTXHCcAp5FfRzcgvnwTkniLTvxJnzDnaXsuXPr0KpoJM/w320-h235/CommonVoice.png" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><span style="font-size: xx-small;"><b>Mozilla's Common Voice active-reading donation page<br /></b></span></td></tr></tbody></table><p>If a browser can read and listen, what do you do for an encore? Mozilla's answer was to make sure that developers could access raw open source audio and speech-processing algorithms to build their own speech tools. Because Mozilla had a large base of contributors excited to donate samples of their accents and speech styles, it created a portal where people from around the world could read sample text aloud, with each recording then validated for intelligibility by another contributor of the same language. The Common Voice audio sample set is now one of the largest open source voice sample databases in the world.</p><p><br />
</p><p></p><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjW_DJmCD8j1dDg75_gYcnk5yydDsDRjotyFsTe7iI_if3oL_uv985qmYT5_4fnsz9zudpuY5HpvephauuA1mivMeWz990mm-JhlJgOOAA04NvU25EfebhanM4b9eXoWGyYBkw2fMdHNPE/s2048/OKdoPackaging1.jpg" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="1294" data-original-width="2048" height="202" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjW_DJmCD8j1dDg75_gYcnk5yydDsDRjotyFsTe7iI_if3oL_uv985qmYT5_4fnsz9zudpuY5HpvephauuA1mivMeWz990mm-JhlJgOOAA04NvU25EfebhanM4b9eXoWGyYBkw2fMdHNPE/w320-h202/OKdoPackaging1.jpg" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><b><span style="font-size: xx-small;">Mozilla WebThings Gateway distributed by OKdo<br /></span></b></td></tr></tbody></table><p>With the advent of new internet-ready hardware devices in the home, Mozilla saw an opportunity to leverage the privacy concepts of the web-enabled device APIs from the FirefoxOS era to provide its own in-home router tools. This tool could act as a secure gateway to those devices instead of having each device connect independently over the web to sundry service vendors. The Emerging Technology team borrowed many concepts from the PWA initiative to let a person's home computer serve as the primary interface for the dozens of home appliances a person might have. (WebThings Gateway could control lamps, speakers, TVs, doorbells and webcams.) This avoided the need for dozens of separate applications to control the many different utilities at home and ensured that access to them was “fire-walled” from outside control. It also bridged a real problem: many of these devices were incompatible with each other. By using a common command language, such as HTTP, between the router and the devices, coordinated behaviors could be enabled for linked devices even if they were manufactured by different companies or used radio frequencies incompatible with other devices in the room. Mozilla WebThings Gateway was distributed by the original equipment manufacturer OKdo as kits that worked out of the box and could be configured in the user’s web browser. </p><p><br />
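In that REST style, coordinating a device comes down to an ordinary HTTP request against the gateway. A sketch of building such a request follows; the endpoint shape, thing id and property name are illustrative of the Web Thing API style rather than a verbatim reference:

```javascript
// Sketch: build the HTTP request that would switch a lamp on through
// a local gateway. Endpoint shape and property names are illustrative
// of the WebThings REST style, not a verbatim API reference.
function buildPropertyRequest(gatewayUrl, thingId, property, value) {
  return {
    method: "PUT",
    url: `${gatewayUrl}/things/${thingId}/properties/${property}`,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ [property]: value }),
  };
}

// In a browser or Node, the request object could then be issued with:
// const req = buildPropertyRequest("http://gateway.local", "lamp", "on", true);
// fetch(req.url, { method: req.method, headers: req.headers, body: req.body });
```

Because the command language is plain HTTP plus JSON, the same request shape works for a lamp, a speaker or a doorbell, which is what made cross-vendor coordination feasible.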
<b>Protection of Anonymity</b><br />
<br />One of the most fascinating industry-wide efforts I saw at Mozilla was the campaign to protect user anonymity and to enable pseudonymity. (Using a pseudonym means calling my website “ncubeeight” instead of “Christopher’s Page,” for instance, the way Mark Twain authored books under a nom de plume rather than his birth name.) As social networking services proliferated in the Web 2.0 era, several mainstream services sought to force users into a web experience where they could have only one single, externally verified, web identity. The policy was lambasted in the web community as a form of censorship that blocked internet authors from using pen-names and aliases.<br />
<br />
On
the flip side of the argument, proponents of the real-name policy
theorized that anonymity of web identities led to trolling behaviors in
social media, where people would be publicly criticized by anonymous
voices who could avoid reputational repercussions. This would, in
theory, let those anonymous voices say things about others that were not
constrained by the normal ethical decency pressures of daily society.<br />
<br />
Wired magazine wrote editorial columns against real-names policies, saying that users turn to the web to be whoever they want to be and to express anonymously ideas that they couldn't express without multiple pen-names. A person’s web identity (sometimes referred to as a “handle,” from the early CB radio practice of using declared identities in radio transmissions) would allow them to be more creative than they otherwise would be. One opinion piece suggested that the web is where people go to be a Humpty Dumpty assortment of diverse identities, not to be corralled together as a single source of identity. I myself had used multiple handles for my web pages. I wanted my music hobby websites, photography website and business websites to all be distinct. In part, I didn’t want business inquiries to be routed to my music website. And I didn’t want my avocation to get tangled with my business either.<br />
<br />
European governments jumped in to legislate the preservation of anonymity with laws referred to as the “Right to be forgotten,” which would force internet publishers to take down content if a user requested it. In a world where content was already fragmented in ways detached from the initial author, how could any web publisher comply with individual takedown requests? It wasn’t part of the web's protocols to disambiguate names across the broader internet. So reputation policing in a decentralized content publishing ecosystem proved tremendously complicated for web content hosts.<br />
<br />
Mozilla championed investigations, such as the <a href="https://coralproject.net/about/">Coral Project</a>, to address the specific problem of internet trolling on the public commenting platforms of news sites. But as a relatively small player in the broader market, it would have been challenging to address a behavioral problem with open source code. A broader issue was looming as a threat to Mozilla’s guiding principles: the emergence of behaviorally-targeted advertising that spanned websites, a significant threat to internet users’ right to privacy.<br />
<br />
The founders of Mozilla had penned a manifesto of principles, established as the guiding framework for how they would govern the projects they intended to sponsor in the early days of the non-profit. (The full manifesto can be read here: <a href="https://www.mozilla.org/en-US/about/manifesto/">https://www.mozilla.org/en-US/about/manifesto/</a>) In general, the developers of web software keep the specific interests of their end users at the forefront of their minds. They woo customers to their services and compete with other developers and products by introducing new utilities that contribute to the convenience and delight of their users. But sometimes the companies that make the core services we rely on have to outsource some of the work of bringing those services to us. With advertising, this outsourcing became a slippery slope. The advertising ecosystem’s evolution in the face of the Web 2.0 emergence, and the trade-offs publishers were making with regard to end-user privacy, became too extreme for Mozilla’s comfort. Many outside Mozilla also believed these privacy compromises were unacceptable given the assurances they wanted to offer their own end-users. They were willing to band together with Mozilla to do something about it.<br />
<br />
While this is a sensitive subject that raises ire for many people, I can sympathize with the motivations of the various complicit parties that contributed to the problem. As a web publisher myself, I had to think a lot about how I wanted to bring my content to my audience. Web hosting costs increase with the size of the audience you wish to entertain. The more people who read and streamed my articles, pictures, music and video content, the more I would have to pay each month to keep them happy and to keep the web servers running. All free web hosting services came with compromises. So, eventually I decided to pay my own server fees and incorporate advertising to offset them.<br />
<br />
Deciding to post advertising on your website is a concession to give up control. If you utilize an ad network with dynamic ad targeting, the advertising platform decides what goods or services show up on your web pages. When I wrote about drum traditions from around the world, advertisers might conclude my website was about oil drums, and it would show ads for steel barrels on my pages. As a web publisher, I winced. Oil barrels aren’t relevant to people who read about African drums. But it paid the bills, so I tolerated it, and I hoped my site visitors would forgive the incongruity of seeing oil barrels next to my drums.<br />
<br />
I was working at Yahoo when the professed boon of behavioral advertising swept through the industry. Instead of serving semantically derived, keyword-matched ads on my drum web page, I could suddenly allow the last webpage you visited to buy “re-targeting” ads on my webpage, continuing a more personally relevant experience for you and replacing those oil barrel ads with offers from sites that had been relevant to your personal journey yesterday, regardless of what my website was about. This did produce the unsightly side effect that products you had already purchased on an ecommerce site would follow you around for months. But it paid the bills, and it paid better than the mis-targeted ads. So more webmasters started doing it.</p><p>Behaviorally targeted ads seemed at the start like a slight improvement in a generally under-appreciated industry. But because the technique worked so well, significant investment demand spurred ever more refined targeting platforms in the advertising technology industry. Internet users became increasingly uncomfortable with what they perceived as pervasive intrusions on their privacy. Early on, I remember thinking, “They’re not targeting me, they’re targeting people like me.” Because the ad targeting was approximate, not personal, I wasn’t overly concerned.<br />
<br />
One day at Yahoo, I received a call. It had been escalated through our customer support channels as a potential product issue, and as the responsible director in the product channel, I was asked if I would talk to the customer. Usually, business directors don’t do customer support directly. But as nobody was available to field the call, I did. The customer was receiving inappropriate advertising in their browser. It had nothing to do with a Yahoo-hosted page, which has filters for such advertising. It was caused by a tracking cookie that the user, or someone who had used the user’s computer, had acquired in a previous browsing session. I walked the user through clearing the cookie store in their browser, which was not a Yahoo browser either, and the problem was resolved. This experience drove home for me how deeply people fear perceived invasions of privacy by internet platforms. The source of the problem had not been related to my company. But this person had nobody to turn to to explain how web pages work. And considering how rapidly the internet emerged, it dawned on me that many people who lived through its emergence likely never had a mentor or teacher explain how these technologies worked.<br />
<br />
</p><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhj9Eo-k2TQ3VwX7cLhz8sCiSDWLkZBqNnlQgSa0cMO_IEtyPUPvb_L0veb5RnO01zyeTIh3pLaUHGkCsViTePKUhv-G3iLOYYB7eiu8wNdZbTfWlhMkUsJdefTt6FF-7YSxaWE1nele38/s2306/Mozilla+Manifesto.png" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="1054" data-original-width="2306" height="293" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhj9Eo-k2TQ3VwX7cLhz8sCiSDWLkZBqNnlQgSa0cMO_IEtyPUPvb_L0veb5RnO01zyeTIh3pLaUHGkCsViTePKUhv-G3iLOYYB7eiu8wNdZbTfWlhMkUsJdefTt6FF-7YSxaWE1nele38/w640-h293/Mozilla+Manifesto.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><span style="font-size: xx-small;"><b>Mozilla's Manifesto of 10 principles<br /></b></span></td></tr></tbody></table>Journalists started to uncover some very unsettling stories about how ad targeting can actually become directly personal. Coupon offers on printed store receipts were revealing customers’ purchase behaviors, which could highlight details of their personal life and even their health. Mozilla’s principle #4 argued that “Individuals’ security and privacy on the internet are fundamental and must not be treated as optional.” They decided to tackle the ills of personal data tracking on the web with the concept of a declaration that would be sent as part of the page-load “header” request. This is the “handshake” process that a web browser does with a website on first page load. Mozilla asserted that if the tracking preferences of the user were declared up front, then it was clear to the site host how the user wanted advertising or tracking customized to that preference. 
And because every browser sends headers, this was a solution that could be implemented across the entire industry in a common and transparent fashion.<p></p><p></p><p>Most savvy web users do know what
browser cookies are and where to find them, and how to clear them if
needed. But one of our security engineers pointed out to me that we
don’t want our customers to always be chasing down errant irritating
cookies and flushing their browser history compulsively. This was
friction, noise and inconvenience that the web was creating for the
web’s primary beneficiaries. The web browser, as the user’s delegated agent, should be able to handle these irritations without wasting its customers’ time or forcing them to hunt down pesky plumbing issues in the preference settings of the software. The major browser makers banded together with Mozilla to try to eradicate this ill.<br />
<br />
At first it started with a very simple tactic. The browser cookie had been invented as a convenience: a small piece of state a website could set in your browser so that it would recognize you when you returned, sparing you from logging in again or rebuilding a shopping cart on every visit. Every web page you visit sets a cookie if it needs to offer you some form of customization. Subsequently, advertisers viewed a visit to their webpage as a kind of consent to be cookied, even if the visit happened inside a browser frame, called an inline frame (iframe). You visited Amazon previously, surely you’d want to come back, they assumed. In principle there should have been an explicit statement of trust, an "opt-in," even though a visit to a web destination is in no way a contract between a user and a host. Session history seemed like a good implied vector to define trust. Except that not all elements of a web page are served from a single source. Single origin was a very Web 1.0 concept. In the modern web environment, dynamically aggregated pages pull content, code and cookies from dozens of sources in a single page load.<br />
<br />The environment of trust was deemed to be the 1st-party relationship between the site a user visits in their web browser and the cookies that site sets directly. Cookies and other tracking elements could also be served in the iframe windows of a webpage: the portions of the page that web designers “outsource” to external content calls. When a cookie was set from an iframe window that wasn’t controlled by the site host, Firefox stored it with a notation attribute that it came from outside the 1st-party context of the site the user had explicitly navigated to.</p><p>In the browser industry, the Firefox and Safari teams wanted to quarantine and forcibly limit what cookies in this “3rd-party context” could do. So we created policies that would limit how long such cookies could stay active after they were first set. We also introduced a feature that let users block certain sites from setting 3rd-party cookies at all. While this was controversial at first, Mozilla engaged with developer and advertising technology companies to come up with alternative means of customizing advertising that did not depend on dropping cookies that could annoy the user.<br />
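The 1st- vs 3rd-party distinction described above has since been written into the cookie standard itself: the SameSite attribute on the Set-Cookie header lets a site declare up front whether its cookie may travel in cross-site (iframe) contexts. A minimal Python sketch of composing such a header; the helper name and example values are mine, not any browser's actual code:

```python
def build_set_cookie(name, value, same_site="Lax", max_age=3600):
    """Compose a Set-Cookie header value.

    SameSite=Lax keeps the cookie out of most cross-site requests
    (e.g. when the page is embedded in a 3rd-party iframe), while
    SameSite=None opts back into cross-site delivery and therefore
    must be paired with Secure per the cookie specification.
    """
    parts = [f"{name}={value}", f"Max-Age={max_age}", f"SameSite={same_site}"]
    if same_site == "None":
        parts.append("Secure")  # browsers reject SameSite=None without Secure
    return "; ".join(parts)

# A 1st-party session cookie vs. one intended for embedded contexts:
first_party = build_set_cookie("sid", "abc123")
cross_site = build_set_cookie("adid", "xyz789", same_site="None")
```

The design point is the same one Firefox's notation attribute made: the cross-site case is the exception that must be declared, not the default.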
<br />Browser makers tended to standardize the handling of web content across their separate platforms through the W3C or other working groups. And in order to create a standard, there had to be a reference architecture that multiple companies could implement and test. The first attempt at this was called “Do Not Track,” or DNT for short. The DNT preference in Firefox, or any other browser, would be sent to each website on first page load. This seemed innocuous enough: the page host could still remember the session for as long as necessary to complete it, while the user’s tracking preference was declared up front. Most viewed the DNT setting in a browser as a simple enough statement of the trust environment between a web publisher and a visitor for the purposes of daily browser usage. <br />
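The signal itself really was that small: a single request header named DNT carrying "1" (do not track) or "0" (explicit consent), with absence meaning no stated preference. A sketch, in Python with a hypothetical helper name, of how a cooperating server could check it:

```python
def tracking_permitted(headers):
    """Return False when the client sent the DNT: 1 request header.

    `headers` is a plain dict of request headers. An absent or
    malformed DNT header is treated as "no preference expressed,"
    which is distinct from DNT: 0 (explicit consent to tracking).
    """
    return headers.get("DNT", "").strip() != "1"

# A server honoring the signal would branch on it before setting
# any tracking cookie or calling an ad-targeting backend:
request_headers = {"Host": "example.org", "DNT": "1"}
if tracking_permitted(request_headers):
    pass  # personalization allowed
else:
    pass  # serve untargeted content only
```

As the rest of the section recounts, nothing in the protocol forced a server to take the honoring branch, which is exactly where self-regulation broke down.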
</p><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgBb4fLGbY6ouXAu4FM6AfZdoOpQ72BdlCMbxtfOiI_RyP9VKxKnLx61SagyTzyOGDlYfhpnvo-tnCmbGW7cOrg7ycqZtsBWTWWxaRGfp0Ob2fjC3lioCQB4Ni1FSKvURlXC65sA_AUKho/s1016/HarveyTestifies2.png" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="751" data-original-width="1016" height="296" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgBb4fLGbY6ouXAu4FM6AfZdoOpQ72BdlCMbxtfOiI_RyP9VKxKnLx61SagyTzyOGDlYfhpnvo-tnCmbGW7cOrg7ycqZtsBWTWWxaRGfp0Ob2fjC3lioCQB4Ni1FSKvURlXC65sA_AUKho/w400-h296/HarveyTestifies2.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><span style="font-size: xx-small;"><b>Harvey Anderson testifies on DNT support across browsers in US Senate</b></span><br /></td></tr></tbody></table>All
the major browser vendors responded to the prospect of government supervision with the idea that they should self-regulate. Meaning, browsers, publishers and advertisers should come to a general consensus across the industry on how best to serve the people using their products and services, without government legislators having to mandate how code should be written or operate. Oddly, it didn’t work so well. Eventually, certain advertisers decided not to honor the DNT header request. The US Congress invited Mozilla’s general counsel, Harvey Anderson, to discuss what was happening and why some browsers and advertising companies had decided to ignore the user preferences stated by our shared code in browser headers.<br />
<br />
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjzmWIKV4CDiPS2Q_VXKLfOKnCMlrflF2GIAaHcopPx4sUh4JfKFOPjGKH0LUOb2AiyKuSz2YS2kH1hE7fzbIseB3U2E1_JiV7fnkYZw_1fj1unCgwfxebPJhsyRrZKiSf9EIL7LvxivoQ/s707/Mozilla_Lightbeam1.png" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="445" data-original-width="707" height="252" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjzmWIKV4CDiPS2Q_VXKLfOKnCMlrflF2GIAaHcopPx4sUh4JfKFOPjGKH0LUOb2AiyKuSz2YS2kH1hE7fzbIseB3U2E1_JiV7fnkYZw_1fj1unCgwfxebPJhsyRrZKiSf9EIL7LvxivoQ/w400-h252/Mozilla_Lightbeam1.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><span style="font-size: xx-small;"><b>Mozilla Lightbeam browser cookie affiliation tracker</b></span><br /></td></tr></tbody></table>Because of all the tracking utilities sites were using to follow browser users, Mozilla sponsored a "Lightbeam" extension to let the user see the trackers and the relationships between the entities that were utilizing them. You could see which sites would drop a targeting cookie in a single web session, and thereafter which other companies accessed that cookie in their own web page serving to the same browser. (Note this doesn't track the user, but rather the session consistency across two different page loads.) Our efforts to work in open source via
DNT with the other industry parties did not ultimately protect users from belligerent tracking. It resulted in a whack-a-mole problem we referred to as "fingerprinting," where advertising companies re-targeted off of computer or phone hardware characteristics, or even off the preference not to be tracked itself! It was a bit preposterous to watch this happen across the industry and to hear the explanations of those doing it. What was very inspiring to watch on the other side was the effort of the Mozilla product, policy and legal teams to push this concern to the fore without asking for legislative intervention. Ultimately, European and US regulators did decide to step in to create legal frameworks to punitively address breaches of user privacy that were enabled by the technology of an intermediary. Even after the launch of the European <a href="https://en.wikipedia.org/wiki/General_Data_Protection_Regulation">GDPR</a> regulatory framework, the ensuing scandals around lax handling of private user data in internet services are widely publicized and at the forefront of technology discussions and education.<br />
<p></p><p>Now the broader consumer industry is keenly aware of the privacy risks that can surface on the web. Open source web browsers are proving to be one of the best ways to protect users against risks they might encounter, through the sharing of best practices and open policy and governance that is transparent across the ecosystem. Now dozens of browsers leverage the core rendering engines of WebKit, Chromium and Gecko as part of their underlying code. So should any new vulnerability be discovered in any of them, all of the contributors to the various rendering engines can patch the vulnerability and pass the improved code downstream to the many browsers that inherit from them.</p><p><b>How They Do It</b> <br /></p><p>You may be asking yourself how companies like Google, Apple and Mozilla develop their browsers in such a way that anyone can help fix them. The important part is openness. Just like the US Patent and Trademark Office, there is a built-in mechanism by which the innovations of today get passed on to the developers who will iterate on them tomorrow. That which doesn't change, doesn't grow. So if we have thousands of people contributing code to fix our products, they will get better based on the broad base of support. <br /></p><p>Working in the open was part of the original strategy AOL had when they open sourced
Netscape. If they could get other companies to build together with them, the collaborative work of contributors outside the AOL payroll could directly benefit the browser team inside AOL. Bugzilla was structured as a hierarchy of modules, where a module owner could prioritize external contributions to the code base and commit them for inclusion in the derivative build, which would be released as a new update package every few months.</p><p>Module Owners, as they were called, would evaluate candidate fixes or new features against their
own list of items to triage in terms of product feature requests or complaints from their own
team. The main team that shipped each version was called Release Engineering. They cared less
about the individual features being worked on than the overall function of the broader software
package. So they would bundle up a version of the then-current software that they would call a
Nightly build, as there were builds being assembled each day as new bugs were up-leveled and
committed to the software tree. Release engineering would watch for conflicts between software
patches and annotate them in Bugzilla so that the various module owners could look for conflicts
that their code commits were causing in other portions of the code base.<br /></p><p></p><p></p><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjcThb7htGtkXzeT7Upc3V3zBqPKkYUw1DvNZgD3Holr1vqCaf3lWxZPA2NKBbp8eKoIpBG6fppKTtfjEtieHqyccT4Ap7MglFHrC9Z9stuEvOlyVYFNZjMoKZo87du7_NXZLuUdRe48N0/s1907/PresentingAtAirMozilla.png" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="1071" data-original-width="1907" height="180" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjcThb7htGtkXzeT7Upc3V3zBqPKkYUw1DvNZgD3Holr1vqCaf3lWxZPA2NKBbp8eKoIpBG6fppKTtfjEtieHqyccT4Ap7MglFHrC9Z9stuEvOlyVYFNZjMoKZo87du7_NXZLuUdRe48N0/w320-h180/PresentingAtAirMozilla.png" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><span style="font-size: xx-small;"><b>Christopher presenting on launch of Social API in public Air Mozilla<br /></b></span></td></tr></tbody></table>While our browser is open source, and anyone can submit a candidate patch for it, we also need to show people how we work and why. So Mozilla has weekly meetings that are open to the public where we highlight all that we are working on. We call our communications platform Air Mozilla, and we host dozens of coding hackathons each week so developers can learn how to code. It doesn't matter what you do in open source if you don't publicize it! Asa Dotzler, our venerable host of these weekly calls, once made a funny quip to me. Someone wanted to open source something to "give" to Mozilla. He said that was indeed an admirable offer. But, he said, open sourcing isn't just about giving. It's about maintaining. So after a piece of software is made open, you have to help the people who want to work with and iterate on it. 
If it's just open sourced but not supported, it is likely to die. That's why WebKit, Chromium and Gecko thrive: Apple, Google and Mozilla take special care in reviewing and iterating on the code that comes in as contributions from the outside.<br /><p></p><p>Over the years Mozilla has evolved the ways that people of all backgrounds and in all locations
can contribute. The simplest way to contribute is to be a user of the software and, if you ever have a problem, to post on Support.mozilla.org, the user forums for Q&A about the products. For those who are willing to roll up their sleeves and dive into some specifics of
product issues and their function, there is bugzilla.mozilla.org where we actually comment on
issues in the software and allocate resources to resolve them. The various internet policy and
advocacy issues are detailed on mozilla.org along with multiple ways people can get involved in
the mission. But even passive use of the products contributes to their refinement, as the Mozilla software has automatic program health utilities that can report system issues encountered by many users, entirely anonymously. So just surfing the web, and reporting any crashes through the automated reporting, helps Safari, Chrome and Firefox get better.</p><p><b>How Partners Help</b><br /><br />After Mozilla had acquired Pocket, they asked me to relocate to Germany to prospect for new partnerships across Europe. Germany has been the largest geographical market for Mozilla software outside of North America. Having studied German extensively in high school and college, I was excited to face this challenge. My wife and I relocated to Berlin, where we decided to live in the heart of the former East Berlin. We found an apartment close to the Fernsehturm (literally Far-seeing Tower, mundanely called TV tower in English) that rises triumphantly above the small isle where Berlin was founded over 800 years ago. </p><p>In Germany, more people download Firefox than use the browser included as the default on their personal computers. Why should this be? I was very enthusiastic to find out. Over months I interviewed people about their preferences, world view and fascinations. Put simplistically, many people told me that Germans really like choice and that they are skeptical of anything that is handed to them with the implication of convenience. So they may really like their Windows PC. But they want to get their software from somewhere else, not the company that powers their PC. This attitude isn’t an anti-authoritarian phobia that Microsoft aspires to consolidate their user data. 
It is rather the idea that, where a choice is given, Germans relish that choice.</p><p>Beyond the stated preferences for a self-determined future, I heard common references to the idea that our German customers liked the fact that Mozilla was non-profit and that the software was open source and therefore entirely subject to peer review for any additions or subtractions Mozilla decided to make to the software at a later date.</p><p>There are other factors beyond the casual commentary I’d heard that likely influenced the perspectives of the people I’d talked to. The percentage of the German population who subscribe to newspapers and journalistic magazines is higher than in many countries in Europe and far higher than in North America. I suspect that media coverage of internet-based malware vulnerabilities is heightened in Germany to the extent that many of the web participants there take special care about their digital hygiene. Specifically, the German ministry for security in information technology (abbreviated “BSI,” for Bundesamt für Sicherheit in der Informationstechnik; more at https://www.bsi.bund.de) has been very articulate in publicizing its insights about malware/phishing risks, which are then amplified by the German press. This is not to say that any web browser these days is particularly better or worse than another. (Edge and Chrome are based on Blink/Chromium, Telekom Browser and Firefox are based on Gecko, and Safari and dozens of mobile apps are based on WebKit. All are modern, open source browsers with active bug fixing and zero-day response teams.) But in a marketplace where there is heightened attention on all aspects of the industry, from shareholders to business models, we can expect a tremendous diversity of user choice to be expressed. The result of user choice in such an arena isn’t something that is swayed by a majority view, nor the company with the best marketing, nor the company with the best operating system bundling. 
Absolutely everything is put to a vote every day. </p><p>As an American, I would say that I’m open to promotions. I’d be inclined to take offers at face value. I may even be more fickle and transitory in my decisions as a consumer on the internet. But from what I have heard in my discussions with my customers, partners and end users in Germany, I’d say they are the opposite. Generalizing a bit too far perhaps, I believe that the German audience is discerning, reluctant to shift behaviors, wary of any offer or promotion, and more likely to go by word-of-mouth or journalistic recommendations than the typical American user. </p><p>Mozilla had tremendous momentum in Germany prior to my arrival of course. Looking back across the decades, word of mouth around the emergence of Mozilla from Netscape may have played a partial role, along with endorsements from the BSI for their open-source and privacy reputation. But thereafter, the leading internet service providers Deutsche Telekom and 1&1 and the leading portal web.de started recommending Gecko/Firefox as an alternative to IE and Safari. This somewhat underscores the value of partnerships in Germany. I wouldn’t say that these companies needed to be particularly rewarded in order to offer choices to their own customers. If they had said that their services were only available in IE and Safari, many of their customers might have been skeptical and might have switched away. But the fact that they offered to support their users in IE and Firefox led to a lot of people going with the “underdog” non-profit instead of the browser that had been served to them as a bundled part of their operating system.<br />Deutsche Telekom’s version of the Gecko browser didn’t even leverage the Netscape or Firefox brand. For them, a skinned version of Gecko, combined with certain features they added onto the open source browser, was enough to get the user to want to download the customized alternative. 
</p><p>Partnerships are great for brand-halo if they confer trust where there is otherwise no established relationship. But even when there is no inherent brand value, trust can still be earned. When I moved to Berlin, there were a couple of websites experimenting with Mozilla’s “Save to Pocket” button. By the time I left Berlin, there were 16 major publishers that had integrated our tools to recommend content to Firefox users. My tactics in promoting the tools were never incentivized by a commercial relationship. Rather, they relied on open and transparent communication of the benefits of the product. It was not by my efficacy as a salesman that Pocket earned its reputation in Germany. I knew from my lessons in the culture that an excessive sales approach wouldn’t have worked anyway. But through transparency and by proving tremendous reliability, over time, my team and I could win trust. Trust and transparency are the basis of everything on the internet. As I found in Japan and Germany, diligent effort on behalf of the user via their service providers can win the day for a middleware company, be it a search engine or a browser.</p><p>As a business developer, it has been very easy for me to talk about Mozilla’s software. The parties on the other side of the table could always take the software I was pitching and run with it using their own brand if they’d wanted to. It was nice to have a pitch that ended with a punch line of “succeed with us and help us enhance the product jointly, or do it yourself on our code and succeed without us.” There has never been a need for Mozilla to be defensive with its innovations because it was set up for generosity. </p><p>As many companies have benefited by forking Mozilla code as have sought partnerships directly. Beyond business interest that is tied into brands, budgets and marketing strategies, Mozilla does a far greater good beyond its own scope of interest. 
Open source developers and their sponsors are achieving a broader good that is beyond the interests of any of them individually. By giving code away as open source and maintaining an active dialog in the open about its future, we inspire developers and web creators across the world to make the next generation of the web stronger and more adaptive to the needs of the market, in a way that is beyond vested interests and shareholder benefit. We are collaboratively building a platform that is mutable and extensible, to be continued through the visions and inspirations of the community we inspire. <br /></p><p><b>Passing the baton</b><br />
It has been amazing to watch Netscape morph from my favorite browser when I started my journey with the internet 25 years ago into a platform that inspires hundreds of millions of people today. And just as the internet is a system of discrete nodes that can stay resilient and independent from other nodes, so the open source ecosystem itself is now a plurality of collaborative (though competitive) entities who jointly defend the network against vulnerabilities, never needing to rely solely on one entity. As it was envisioned for Netscape’s future two decades ago, so it is now across a broad pool of international developers who are focusing on the future of the utility and flexibility of the web. <br /><br />Some might wonder, “Wouldn’t it be great if everyone just used the same software?” It might seem that that would address a lot of the site compatibility problems web developers run into. But think back to the concept of the competitive ecosystem that governments are so keen on. Competition is cumbersome and inefficient to a great extent. But the more people who are picking up the code and trying something new and innovative with it, the better, ultimately, the result for the broader consumer base. <br /><br />Mozilla was spawned out of the desire to create exactly that open and competitive ecosystem that would sustain the open internet beyond the influence of any single player or contributor. (Beyond even themselves.) Now that there are several competing open source initiatives, we have less to worry about from any single one of them. As the internet itself was designed as a series of independent nodes that could function with or without others in the network, so it is with the competitive ecosystem of software developers fostering what we now have of this amazing internet. Mozilla has seemed to be somewhat of a diaspora. 
Its former contributors are now spread across almost all technology firms around the world.<br /><br /></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhajrlFjVGgOU3g9bpK3rgrtiheRTIL1K6n0nSUeoaB5CJ2LyKoEuUrdRkwqcrvw7kW1c7-mOXtR9duD_bsN1F1Y0Q9c_Ps01x6zjMGsYA036Ykok3eKTY9cpYW12u91hyphenhyphenzo__TPY9xsSA/s2048/RedwoodFairyRing.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="2048" data-original-width="1536" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhajrlFjVGgOU3g9bpK3rgrtiheRTIL1K6n0nSUeoaB5CJ2LyKoEuUrdRkwqcrvw7kW1c7-mOXtR9duD_bsN1F1Y0Q9c_Ps01x6zjMGsYA036Ykok3eKTY9cpYW12u91hyphenhyphenzo__TPY9xsSA/w300-h400/RedwoodFairyRing.jpg" width="300" /></a></div>Sometimes I go to the redwood forests around the San Francisco Bay Area. There, you’ll often see trees growing in a circle called a "fairy ring." Redwoods can reproduce by creating burls that protrude from their trunks. If these fall to the ground, they become another tree. So the rings of redwood trees that one sees are just the echo of a center. Mozilla never aspired to be the largest player. (From what I heard at least.) It was to be an open source example of its ideals, which were its burls. It is a lab, a Petri dish for experimentation and a reference point that expresses the developers' ideals in code. <p></p><p>Looking back over the decades of my career, it has been awe-inspiring to see what lasts and what doesn’t. Some of the deals I’ve worked on have spanned a decade. But that’s somewhat rare in this industry. 
The best we can aspire to is to contribute some small part that makes the broader tools and market better, for the furtherance of a competitive and open ecosystem that allows new entrants to contribute what they may to advance the work of the previous generation.<br /></p><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj4jkYiNyl-728C8sN6fUBAYET5wMXn2EKIXlwlGQ1E9VILxb8fKXkeiORxNXlLbRhXyFyyJ6HlXyJQb59cQ-MIWFPStnyJEyHfXbVyU0CeL3QkhcMnaBXS9VQ9zM3epyKFbRs1hA7XiYs/s1557/Monument.jpeg" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="1377" data-original-width="1557" height="290" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj4jkYiNyl-728C8sN6fUBAYET5wMXn2EKIXlwlGQ1E9VILxb8fKXkeiORxNXlLbRhXyFyyJ6HlXyJQb59cQ-MIWFPStnyJEyHfXbVyU0CeL3QkhcMnaBXS9VQ9zM3epyKFbRs1hA7XiYs/w328-h290/Monument.jpeg" width="328" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><span style="font-size: xx-small;"><b>The Firefox Monument on the Embarcadero<br /></b></span></td></tr></tbody></table>When Mozilla opened its first San Francisco office, they decided to put a monument to Firefox on the main road near the western span of the Bay Bridge. This monument has a large 3-dimensional globe representing the Earth, with an orange fox sheltering it. Mozilla inscribed the names of all its employees and contributors along the sides of the monument with a bold inscription: “Doing good is part of our code.” Over my years in the internet industry I’d received various awards with my name printed on them that I could put on my home shelf to inspire memories and pride. But I’d never had my name placed on a public monument like this before. 
Walking past it always makes me feel proud of my association with this amazing group of people.<br /><br />2020 brought a global viral pandemic that has taken millions of lives. As governments shut down their local economies to stem the spread of the pandemic, all communities are being impacted in sundry ways. No part of our world is untouched by this. Mozilla, along with thousands of other businesses, is feeling the impact and has had to reduce its staff in order to ensure that it can persist into the future and stay true to the values we all support.<br />
<br />
It is always compelling and rewarding to see your efforts reflected in the success of others. It’s my time now to pass on the baton with pride, knowing the race will go on. The wonderful thing about open source is that it can live on beyond just what one company or a team of individuals can contribute to it. I’ve seen time and time again how companies unrelated to Mozilla can pick up the code and run with it, saving their developers time and effort that would otherwise have to be spent building from scratch. AOL had cast 1000 seeds when they gave up Netscape to the community. It has spawned an industry of amazing competitive collaboration without needing to be dreamed up by legislation of governments.<br /><br />There is a parable in Japan about the knife that gets sharper with use rather than dulling. Open source is the closest to that allegory that I’ve seen. It’s been fascinating to be a part of it! Open source is a tool that is honed by the earnest endeavor of thousands of people sharing their creativity toward a common goal. And because of its transparency, it cannot be maliciously encumbered without the community being able to see and react.<br /><br />Thank you to the teams of Mozillians who’ve inspired me from the beginning! <br /><p></p>ncubeeighthttp://www.blogger.com/profile/07874174268038500927noreply@blogger.com0tag:blogger.com,1999:blog-4288746931605915378.post-2417715486262037462019-05-05T18:49:00.000-07:002022-08-22T16:08:27.344-07:00An Author-Optimized Social Network Approach<div class="“p1">
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right; margin-left: 1em; text-align: right;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjldwSUjYBmYW4bwfJESUKMlAXrQox6twljYdJpOD5E3tPRBQzzhauQ_VlyiAYl0vPELtNNo19qPjhEQqmnuUGvOChYhppOgg1eQU__eAntkR4MNST4B141LeZbAkyY4yCQYjq7oL_pudI/s1600/Sciam+May+2019.png" imageanchor="1" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="1348" data-original-width="1232" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjldwSUjYBmYW4bwfJESUKMlAXrQox6twljYdJpOD5E3tPRBQzzhauQ_VlyiAYl0vPELtNNo19qPjhEQqmnuUGvOChYhppOgg1eQU__eAntkR4MNST4B141LeZbAkyY4yCQYjq7oL_pudI/s200/Sciam+May+2019.png" width="182" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Sciam Art credit: jaybendt.com/</td></tr>
</tbody></table>
In this month’s edition of Scientific American magazine, <a href="https://www.soonishpodcast.org/the-host" target="_blank">Wade Roush</a> comments on social networks' potential deleterious impact on emotional well-being. (<a href="https://www.scientificamerican.com/magazine/sa/2019/05-01/" target="_blank">Scientific American May 2019</a>: Turning Off the Emotion Pump) He prompts, "Are there better social technologies than Facebook?" and cites previous attempts such as the now-defunct <a href="https://en.wikipedia.org/wiki/Path_(social_network)" target="_blank">Path</a> and still-struggling <a href="https://en.wikipedia.org/wiki/Diaspora_(social_network)" target="_blank">Diaspora</a> as promising developments. I don’t wish to detract from the contemporary concerns about notification overload and privacy leaks. But I’d like to highlight the positive side of social platforms for spurring creative collaboration and suggest an approach that could expand the positive impacts they facilitate in the future. I think the answer to his question is that we need a greater diversity of platforms and better utilities.
<br />
<br />
In our current era, everyone is a participant, in some way, in the authorship of the web. That's a profound and positive thing. We are all enfranchised in a way that previously most were not. As an advocate for the power of the internet for advancing creative expression, I believe the benefits we've gained by this online enfranchisement should not be overshadowed by the aforementioned bumps along the road. We need more advancement, perhaps in a different way than has been achieved in most mainstream social platforms to date. Perhaps it is just the utilization that needs to shift, more than the tools themselves. But as a product-focused person, I think some design factors could shape the change we'd need to see for social networks to be a positive force in everybody's lives.
<br />
<br />
When Facebook turned away from "the Facebook Wall", its earliest iteration, I was fascinated by this innovation. It was no longer a bunch of different profile destinations interlinked by notifications of what people said about each other. It became an atomized webpage that looked different to everyone who saw it, depending on the quality of contributions of the linked users. The outcome was a mixed bag because the range of experiences of each visitor was so different. Some people saw amazing things from active creators/contributors they'd linked to. Some people saw the boredom of a stagnant or overly-narrow pool of peer contributors reflected back to them. Whatever your opinion of the content of Facebook, Twitter and Reddit, as subscription services they provide tremendous utility in today's web. They are far superior to the <a href="https://en.wikipedia.org/wiki/Webring" target="_blank">web-rings</a> and <a href="https://en.wikipedia.org/wiki/DMOZ" target="_blank">Open Directory Project</a> of the 1990s, as they are reader-driven rather than author/editor driven.
<br />
<br />
The experimental approach I'm going to suggest for advancement of next-generation social networks should probably happen outside the established platforms. For when <a href="https://www.theatlantic.com/technology/archive/2014/06/everything-we-know-about-facebooks-secret-mood-manipulation-experiment/373648/" target="_blank">experimentation</a> is done within these services it can jeopardize the perceived user control and trust that attracted their users in the first place.
<br />
<br />
In a brainstorm, an entrepreneur named Lisa pointed out that the most engaging and involved collaborative discussions she'd seen had taken place in <a href="https://www.ravelry.com/" target="_blank">Ravelry</a> and <a href="https://secondlife.com/" target="_blank">Second Life</a>. Knitting and creating 3D art take an amazing amount of time investment. She posited that it may be this invested time that leads to the quality of the personal interactions that happen on such platforms. It may actually be the casualness of engagement on conventional public forums that makes those interactions more haphazard, impersonal and less constructive or considerate. Our brainstorm turned to how more such platforms might emerge to spur ever greater realization of new authorship, artistry and collaboration. We focused not on volume of people nor velocity of engagement, but rather greatest <i>individual contribution</i>.
<br />
<br />
The focus (raison d'être) of a platform tends to skew the nature of the behaviors on it and can hamper or facilitate the individual creation or art represented based on the constraints of the platform interface. (For instance Blogger, Wordpress and Medium are great for long form essays. Twitter, Instagram and Reddit excel as forums for sharing observations about other works or references.) If one were to frame a platform objective on the maximum volume of individual contribution or artistry and less on the interactions, you'd get a different nature of network. And across a network of networks, it would be possible to observe what components of a platform contribute best to the unfettered artistry of the individual contributors among them.
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgPaziH-Z6hWBHRC5AHj2FnC6Rfy05Sv2esxgnHU6ngGzRBcTrsNI5W2GzJZ31bSTJrnqMiCdXaFubmMRRsibahcY_89yf8Yr2g-ziqZk2XAbeSQ6T6552k0BoOBtN58muTwI-kXV3v32c/s1600/Omikoshi.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="900" data-original-width="1600" height="180" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgPaziH-Z6hWBHRC5AHj2FnC6Rfy05Sv2esxgnHU6ngGzRBcTrsNI5W2GzJZ31bSTJrnqMiCdXaFubmMRRsibahcY_89yf8Yr2g-ziqZk2XAbeSQ6T6552k0BoOBtN58muTwI-kXV3v32c/s320/Omikoshi.jpg" width="320" /></a></div>
I am going to refer to this platform concept as "Mikoshi", because it reminds me of the Japanese portable shrines of the same name, pictured at right. In festival parades, dozens of people heft a one-ton shrine atop their shoulders. The bobbing of the shrine is supposed to bring good luck to the participants and onlookers. The time I participated in a mikoshi parade, I found it to be an exhausting effort, fun as it was. The thing that stuck out to me was that the whole group was focused toward one end. There were no detractors.
<br />
<br />
Metaphorically, I see the mikoshi act of revelry as somewhat similar to the collaborative creative artistry sharing that Lisa was pointing out. In Lisa's example, there was a barrier to entry and a shared intent in the group. You had to be a knitter or a 3D artist to have a seat at the table. Why would hurdles create the improved quality of engagement and discourse? Presumably, if you're at that table you want to see others succeed and create more! There is a certain amount of credibility and respect the community gives contributors based on the table-stakes of participation that got them there. This is the same with most other effort-intensive sharing platforms, like <a href="https://www.mixcloud.com/" target="_blank">Mixcloud</a> and <a href="https://soundcloud.com/" target="_blank">Soundcloud</a>, where I contribute. The work of others inspires us to increase our level of commitment and quality as well. The shared direction, the furtherance of art, propels ever more art by all participants. It virtuously improves in a cycle. This drives greater complexity, quality and retention with time.
<br />
<br />
To achieve a pure utility of greatest contributor creation would be a different process than creating a tool optimized purely for volume or velocity of engagement. Lisa and I posited an evolving biological style of product "mutation" that might create a proliferating organic process, driven by participant contribution and automated selection of attributes observed across the most healthy offshoot networks. Maximum individual authorship should be the leading selective pressure for Mikoshi to work. This is not to say that essays are better than aphorisms because of their length. But the goal to be incentivized by a creativity-inspiring ecosystem should be one where the individuals participating feel empowered to create to the maximum extent. There are other tools designed for optimizing velocity and visibility, but those elements could be detrimental to individual participation or group dynamics.
<br />
<br />
To give over control to contribution-driven optimization as an end, Mikoshi would need to be a modular system akin to the <a href="https://en.wikipedia.org/wiki/WordPress" target="_blank">Wordpress</a> platform of <a href="https://automattic.com/" target="_blank">Automattic</a>. But the optimizing mutation of Mikoshi would have to happen outside the influence of content creators' drive for self-promotion. This is similar to the way that "<a href="https://patents.google.com/patent/US6285999B1/en" target="_blank">Pagerank</a>" listened to the interlinking of non-affiliated web publishers to drive its anti-spam filter, rather than to the publishers' own attempts to promote themselves. Visibility and promulgation of new Mikoshi offshoots should be delegated to a different promotion-agnostic algorithm entirely, one looking at the health of a community of active authors in other preceding Mikoshi groups. Evolutionary adaptation is driven by what ends up dying. But Mikoshi would be driven by what previously thrived.<br />
<br />
I don't think Mikoshi should be a single tool, but an approach to building many different web properties. It's centered around planned redundancy and planned end-of-life for non-productive forks of Mikoshi. Any single Mikoshi offshoot could exist indefinitely. But ideally, certain of them would thrive and attract greater engagement and offshoots.<br />
<br />
The successive alterations of Mikoshi would be enabled by its capability to fork, as open source projects such as <a href="https://en.wikipedia.org/wiki/Linux" target="_blank">Linux</a> or <a href="https://en.wikipedia.org/wiki/Gecko_(software)" target="_blank">Gecko</a> do. As successive deployments are customized and distributed, the most useful elements of the underlying architecture can be annotated with telemetry to suggest optimizations to other Mikoshi forks that may not have certain specific tools. This quasi-organic
process, with feedback on the overall contribution "health" of the ecosystem represented by participant contribution,
could then suggest attributes for viable offshoot networks to come. (I'm framing this akin to a browser's extensions,
or a Wordpress template's themes and plugins which offer certain optional expansions to pages using past templates of other developers.) The end products of Mikoshi are multitudinous and not constrained. Similar to Wordpress, attributes to be included in any future iteration are at the discretion of the communities maintaining them. <br />
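To make the selection mechanism concrete, here is a toy sketch of the loop described above, in which forks report telemetry on individual contribution and the attributes of the healthiest forks are pooled as suggestions for the next offshoot. All names, metrics and thresholds are hypothetical illustrations, not a specification of any real platform.

```python
# Toy sketch of the Mikoshi selection idea (all names hypothetical):
# each fork reports a contribution-health metric via telemetry, and the
# optional attributes of the healthiest forks become suggestions for
# future offshoots.

from dataclasses import dataclass


@dataclass
class Fork:
    name: str
    attributes: set          # optional platform features this fork enabled
    posts_per_author: float  # telemetry: average individual contribution


def suggest_attributes(forks, top_n=2):
    """Rank forks by contribution health and pool the attributes of the
    healthiest ones as suggestions for the next offshoot network."""
    healthiest = sorted(forks, key=lambda f: f.posts_per_author, reverse=True)[:top_n]
    suggested = set()
    for fork in healthiest:
        suggested |= fork.attributes
    return suggested


forks = [
    Fork("knitters", {"galleries", "patterns"}, 9.2),
    Fork("3d-artists", {"galleries", "scene-embeds"}, 7.8),
    Fork("linkblog", {"reshares"}, 1.1),  # low individual authorship
]

print(suggest_attributes(forks))  # pooled attributes of the two healthiest forks
```

The key design point the sketch tries to capture is that selection listens only to the contribution telemetry, never to any fork's self-promotion, mirroring the Pagerank analogy above.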
<br />
Of course Facebook and Reddit could facilitate this. Yet "roll your own platform" doesn't particularly fit their business models. Mozilla manages several purpose-built social networks for its communities (<a href="http://bugzilla.mozilla.org/" target="_blank">Bugzilla</a> and <a href="https://mozillians.org/" target="_blank">Mozillians</a> internally, and the former <a href="https://webmaker.org/" target="_blank">Webmaker</a> and new <a href="https://blog.mozvr.com/introducing-hubs-a-new-way-to-get-together-online/" target="_blank">Hubs</a> for web enthusiasts), but Mikoshi doesn't particularly fit their mission or business model either. I believe <a href="http://automattic.com/" target="_blank">Automattic</a> is better positioned to go after this opportunity, as it already powers roughly a third of global websites and has competencies in massively-scaled hosting of web pages with social components.
<br />
<br />
I know from my own personal explorations on dozens of web publishing and media platforms that they have each, in different ways, facilitated and drawn out different aspects of my own creativity. I've seen many of these platforms die off. It wasn't that those old platforms didn't have great utility or value to their users. Most of them were just not designed to evolve. They were essentially too rigid, or encountered political problems within the organizations that hosted them. As the old Ani Difranco song "<a href="https://www.youtube.com/watch?v=a9elTQix5oQ" target="_blank"><i>Buildings and Bridges</i></a>" points out, "What doesn't bend breaks." (Caution that lyrics contain some potentially objectionable language.) The web of tomorrow may need a new manner of collaborative social network that is able to weather the internal and external pressures that threaten them. Designing an adaptive platform like Mikoshi may accomplish this. </div>
ncubeeighthttp://www.blogger.com/profile/07874174268038500927noreply@blogger.com0tag:blogger.com,1999:blog-4288746931605915378.post-19583777837548603302019-04-14T09:19:00.010-07:002021-01-22T10:13:12.946-08:00My 20 years of webTwenty years ago I resigned from my job at a financial news wire to pursue a career in San Francisco. We were transitioning our news service (Jiji Press, a Japanese wire service similar to Reuters) to being a web-based news site. I had followed the rise and fall of Netscape and the Department of Justice anti-trust case on Microsoft's bundling of IE with Windows. But what clinched it for me was Congressional testimony by the Chairman of the Federal Reserve (the US central bank) about his inability to forecast the potential growth of the Internet.<br />
<br />
Working in the Japanese press at the time gave me a keen interest in international trade. Prime Minister Hashimoto negotiated with United States Trade Representative Mickey Cantor to enhance trade relations and reduce protectionist tariffs that the countries used to artificially subsidize domestic industries. Japan was the second largest global economy at the time. I realized that if I was going to play a role in international trade it was probably going to be in Japan or on the west coast of the US.<br />
I decided that because Silicon Valley was the location where much of the industry growth in internet technology was happening, that I had to relocate there if I wanted to engage in this industry. So I packed up all my belongings and moved to San Francisco to start my new career.<br />
<br />
At the time, there were hundreds of small agencies that would build websites for companies seeking to establish or expand their internet presence. I worked with one of these agencies to build Japanese versions of clients' English websites. My goal was to focus my work on businesses seeking international expansion.<br />
<br />
At the time, I encountered a search engine company called LookSmart, which aspired to offer business-to-business search engines to major portals. (Business-to-business, often abbreviated B2B, means serving other companies, which in turn serve their own direct consumers, a model called business-to-consumer, or B2C.) Their model was similar to Yahoo.com, but instead of trying to get everyone to visit one website directly, they wanted to distribute the search infrastructure to other companies, combining the aggregate resources needed to support hundreds of companies into one single platform that was customized on demand for those other portals.<br />
<br />
At the time LookSmart offered only English-language web search. So I proposed launching their first foreign-language search engine and entering the Japanese market to compete with Yahoo!'s largest established user base outside the US. LookSmart's president had strong confidence in my proposal and expanded her team to include a Japanese division to invest in the Japanese market launch. After we delivered our first version of the search engine, Microsoft's MSN licensed it to power their Japanese portal and LookSmart expanded their offerings to include B2B search services for Latin America and Europe.<br />
<br />
I moved to Tokyo, where I networked with the other major portals of Japan to power their web search as well. Because at the time Yahoo! Japan wasn't offering such a service, a dozen companies signed up to use our search engine. Once the combined reach of Looksmart Japan rivaled that of the destination website of Yahoo! Japan, our management brokered a deal for LookSmart Japan to join Yahoo! Japan. (I didn't negotiate that deal by the way. Corporate mergers and acquisitions tend to happen at the board level.)<br />
<br />
By this time Google was freshly independent of its exclusive contract to provide what was called "algorithmic backfill" for the Yahoo! Directory service that Jerry Yang and David Filo had pioneered at Stanford University. Google started a B2C portal and began offering their own B2B publishing service by acquiring Yahoo! partner Applied Semantics, giving them the ability to put Google ads into every webpage on the internet without needing users to conduct searches anymore. Yahoo!, fearing competition from Google in B2B search, acquired the Inktomi, Altavista, Overture and Fast search engines, three of which were leading B2B search companies. At this point Yahoo!'s Overture division hired me to work on market launches across Asia Pacific beyond Japan. <br />
<br />
With Yahoo! I had excellent experiences negotiating search contracts with companies in Japan, Korea, China, Australia, India and Brazil before moving into their Corporate Partnerships team to focus on the US search distribution partners.<br />
<br />
Then in 2007 Apple launched their first iPhone. Yahoo! had been operating a lightweight mobile search engine serving HTML optimized for display on mobile phones. One of my projects in Japan had been to introduce Yahoo!'s mobile search platform as an expansion to the Overture platform. However, with the ability of the iPhone to actually show full web pages, the market was obviously going to shift.<br />
<br />
I and several of my colleagues became captivated by the potential to develop specifically for the iPhone ecosystem. So I resigned from Yahoo! to launch my own company, ncubeeight. Similar to the work I had been doing at LookSmart and prior, we focused on companies that had already launched on the desktop internet that were now seeking to expand to the mobile internet ecosystem.<br />
<br />
Being a developer in a nascent ecosystem was fascinating. But it's much more complex than the open internet because discovery of content on the phone depends on going through a marketplace, which is something like a business directory. Apple and Google knew there were great business models of being a discovery gateway for this specific type of content. Going "direct to consumer" is an amazing challenge of marketing on small-screen devices. And gaining visibility in Apple iTunes and Google Play is even more challenging a marketing problem than publicizing your services on the desktop Internet. <br />
<br />
Next I joined Mozilla to work on the Firefox platform partnerships. It has been fascinating working with this team, which originated from the Netscape browser in the 1990's and transformed into an open-source non-profit focusing on the advancement of internet technology in <i>conjunction,</i> rather than solely in competition, with Netscape's former competitors.<br />
<br />
What is interesting from the outside perspective is most likely that companies that used to compete against each other for engagement (by which I mean your attention) are now unified in the idea of working together to enhance the ecosystem of the web. Google, Microsoft, Mozilla and Apple now all embrace open source for the development of their web rendering engines. Now these companies are beholden to an ecosystem of developers who create end-user experiences as well as the underlying platforms that each company provides as custodians of the ecosystem. The combined goals of a broad collaborative ecosystem are more important and impactful than any single platform or company. A side note: Amazon is active in the wings here, basing their software on spin-off code from Google's Android open source software. Also, after their mobile phone platform faltered, they started focusing on a space where they could completely pioneer a new web-interface, voice. (More on that in a separate post.)<br />
<br />
When I first came to the web, much of what it was made up of was static html. Over the past decade, web pages shifted to dynamically assembled pages and content feeds determined by individual user customizations. This is a fascinating transition that I witnessed while at Yahoo! which has been the subject of many books. (My favorite being Sarah Lacy's <a href="https://www.amazon.com/Once-Youre-Lucky-Twice-Good-ebook/dp/B00139ZHGY/" target="_blank">Once You're Lucky, Twice You're Good</a>.)<br />
<br />
Sometimes in reflective moments, one thinks back to what one's own personal legacy will be. In this industry, dramatic shifts happen every three months. Websites and services I used to enjoy tremendously 10 or 20 years ago have long since been acquired, shut down or pivoted into something new. So what's going to exist that you could look back on after 100 years? Probably very little except for the content that has been created by website developers themselves. It is the diversity of accessible web content that brings us every day to the shores of the world wide web.<br />
<br />
There is a service called the <a href="https://web.archive.org/" target="_blank">Internet Archive</a> that archives historical versions of web pages. I wonder how the current web will look from a future perspective, given this era of dynamically-customized feeds that differ on each page-load based on the user viewing them. I imagine an alien landing on Earth to search the history of our species, surfing back through time in the Internet Archive's "Wayback Machine." I imagine they'll see a dramatic drop-off in content published in static form after 2010. The current decade will seem spare of anything of note that isn't exported to static html.<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjj-iYnIqKu0TstVD9usgS8VHlckpaQWTT0cZ-z2R8azbm1epS_0RWL00K7xz8SPvMxVpzDTuPgOueozkuA7A6FQ-W1Y1QsTLjl34_qrlN5trE12I13nDNrEXkqM3qqRb1lTh-Ju4oQk04/s1600/Rhythmatism.png" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="779" data-original-width="557" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjj-iYnIqKu0TstVD9usgS8VHlckpaQWTT0cZ-z2R8azbm1epS_0RWL00K7xz8SPvMxVpzDTuPgOueozkuA7A6FQ-W1Y1QsTLjl34_qrlN5trE12I13nDNrEXkqM3qqRb1lTh-Ju4oQk04/s320/Rhythmatism.png" width="228" /></a>The amazing thing about the Internet is the creativity it brings out of the people who engage with it. Back when I started telling the story of the web to people, I realized I needed to have my own web page. So I needed to figure out what I wanted to amplify to the world. Because I admired folk percussion that I'd seen while I was living in Japan, I decided to make my <a href="http://www.rhythmatism.com/" target="_blank">website</a> about the drums of the world. I used a web editor called Geocities to create the web page you see at right. I decided to leave it in the original 1999 Geocities template design for posterity's sake. Since then my drum pursuits have expanded to include various other web projects, including a <a href="https://www.youtube.com/user/rhythmatism" target="_blank">YouTube channel</a> dedicated to traditional folk percussion and a Flickr channel dedicated to <a href="https://www.flickr.com/photos/36837209@N00/" target="_blank">drum photos</a>. Subsequently, I launched a <a href="https://soundcloud.com/rhythmatism" target="_blank">Soundcloud channel</a> and a Mixcloud <a href="https://www.mixcloud.com/rhythmatist/" target="_blank">DJ channel</a> for sharing music I'd composed or discovered over the decades.<br />
<br />
The funny thing is, when I created this website, people found me who I never would have met or found otherwise. I got emails from people around the globe who were interested in identifying drums they'd found. Even Cirque du Soleil wrote me asking for advice on drums they should use in their performances!<br />
<br />
Since I'd opened the curtains on my music exploration, I started traveling around to regions of the world that had unique percussion styles. What had started as a small web development project became a broader crusade in my life, taking me to various remote corners of the world I never would have traveled to otherwise. And naturally, this spawned a <a href="http://leapingaroundtheworld.com/" target="_blank">new website</a> with another <a href="https://www.youtube.com/channel/UCQe49ggu6eFqv5sCg1IxfbA/playlists" target="_blank">Youtube channel</a> dedicated to travel videos.<br />
<br />
The web is an amazing place where we can express ourselves, discover and broaden our passions and of course connect to others across the continents. <br />
<br />
When I first decided to leave the journalism industry, it was because I believed the industry itself was inherently about waiting for <i>other people</i> to do or say interesting things. In the industry I pursued, the audience was waiting for me to do <i>that </i>interesting thing myself. The Internet is tremendously valuable as a medium. It has been an amazing 20 years watching it evolve. I'm very proud to have had a small part in its story. I'm riveted to see where it goes in the next two decades! And I'm even more riveted to see where I go, with its help.<br />
<br />
On the web, the journey you start seldom ends where you thought it would go! <br />
<br />
<br />ncubeeighthttp://www.blogger.com/profile/07874174268038500927noreply@blogger.com0tag:blogger.com,1999:blog-4288746931605915378.post-65797997987845561622019-01-13T22:38:00.000-08:002019-04-19T15:52:59.423-07:00How a speech-based internet will change our perceptions <table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right; text-align: right;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgkxAuChyphenhyphendoW39z7TKBEJJ7sf1TljuwAgD2pK6zE-_8KSFCMia2Pqh4kjWjqjypz2GvJ8hbfle6FL8n6yPbYt1fBNaOH1sQTL0AJx8hvedrJPzqmKLJ09rvNRhFbgqTyZYbgSG9K2gM2O8/s1600/Beowulf_Cotton_MS_Vitellius_A_XV_f._132r.jpg" imageanchor="1" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgkxAuChyphenhyphendoW39z7TKBEJJ7sf1TljuwAgD2pK6zE-_8KSFCMia2Pqh4kjWjqjypz2GvJ8hbfle6FL8n6yPbYt1fBNaOH1sQTL0AJx8hvedrJPzqmKLJ09rvNRhFbgqTyZYbgSG9K2gM2O8/s320/Beowulf_Cotton_MS_Vitellius_A_XV_f._132r.jpg" width="190" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">https://en.wikipedia.org/wiki/Beowulf</td></tr>
</tbody></table>
<div class="p1">
<b>A long time ago I remember reading <a href="http://stevenpinker.com/" target="_blank">Steven Pinker</a> discussing the evolution of language. I had read <a href="https://en.wikipedia.org/wiki/Beowulf" target="_blank">Beowulf</a>, <a href="https://en.wikipedia.org/wiki/Geoffrey_Chaucer" target="_blank">Chaucer</a> and <a href="https://en.wikipedia.org/wiki/William_Shakespeare" target="_blank">Shakespeare</a>, so I was quite interested in these linguistic adaptations over time. Language shifts rapidly through the ages, to the point that even English of 500 years ago sounds foreign to us now. His thesis in the piece was about how language is going to shift toward the Chinese pronunciation of it. Essentially, the majority of speakers will determine the rules of the language’s direction. There are more Chinese in the world than native English speakers, so as they adopt and adapt the language, more of us will speak like the greater factions of our language’s custodians. The future speakers of English will determine its course. By force of "majority rules", language will go in the direction of its greatest use, which will be the Pangea of the global populace seeking common linguistic currency with others of foreign tongues. Just as the US dollar is an “exchange currency” standard at present between foreign economies, English is the shortest path between any two ESL speakers, no matter their backgrounds.</b>
<div class="p2">
<br /></div>
<div class="p1">
<b>Subsequently, I heard these concepts reiterated in a <a href="https://www.scientificamerican.com/podcasts/" target="_blank">Scientific American podcast.</a> The idea there was that English, when spoken by those who learned it as a second language, is easier for other speakers to understand than native-spoken English. British, Indian, Irish, Aussie, New Zealand and American English are relics in a shift, very fast, away from all of them. As much as we appreciate each, they are all toast. Corners will be cut, idiomatic usage will be lost, as the fastest path to information conveyance determines the path that language takes in its evolution. English will continue to be a mutt language flavored by those who adopt and co-opt it. Ultimately meaning that no matter what the original language was, the common use of it will be the rules of the future. So we can say goodbye to grammar as native speakers know it. There is a shift happening that is greater than our traditions. And we must brace as this evolution takes us with it to a linguistic future determined by others.</b>
<div class="p2">
<br /></div>
<div class="p1">
<b>I’m a person who has greatly appreciated idiomatic and aphoristic usage of English. So I’m one of those now-old codgers who cringe at the gradual degradation of language. But I’m listening to an evolution in process, a shift toward a language of broader and greater utility. So the cringes I feel are reactions to the time-saving adaptations of our language as it becomes something greater than it has been in the past. Brits likely thought and felt the same as their linguistic empire expanded. Now is just a slightly stranger shift.</b>
<div class="p3">
<b></b><br /></div>
<div class="p1">
<b>This evening I was in the kitchen, and I decided to ask <a href="http://alexa.amazon.com/" target="_blank">Amazon Alexa</a> to play some Led Zeppelin. This was a band that used to exist in the 1970’s era during which I grew up. I knew their entire corpus very well. So when I started hearing one of my favorite songs, I knew this was <i>not</i> what I had asked for. It was a good rendering for sure, but it was not Robert Plant singing. Puzzled, I asked Alexa who was playing. She responded “Lez Zeppelin”. This was a new band to me. A very good cover band I admit. (You can read about them here: </b><span class="s1"><b><a href="http://www.lezzeppelin.com/">http://www.lezzeppelin.com/</a>)</b></span></div>
<div class="p3">
<b>But why hadn't Alexa responded to my initial request? Was it because Atlantic Records hadn't licensed Led Zeppelin's actual catalog for Amazon Prime subscribers?</b></div>
<div class="p3">
<br /></div>
<div class="p1">
<b>Two things struck me. First, we aren’t going to be tailoring our English to Chinese ESL common speech patterns, as Mr. Pinker predicted. Second, we’re probably going to shift our speech patterns toward what Alexa, Siri, Cortana and Google Home can actually understand. They are the new ESL vector that we hadn't anticipated a decade ago. It is their use of English that will become conventional; English is already the de facto language of computing, and so our language is now the slave to code.</b></div>
<div class="p3">
<b></b><br /></div>
<div class="p1">
<b>What this means for the band that used to be called Led Zeppelin is that it will no longer be discoverable. In the future, if people say “Led Zeppelin” to Alexa, she’ll respond with Lez Zeppelin (the rights-available version of the band formerly known as "Led Zeppelin"). Give humanity 100 years or so, and the idea of a band called Led Zeppelin will seem strange to folk. Five generations removed, nobody will care who the original author was. The "rights" holder will be irrelevant. The only thing that will matter in 100 years is what the bot suggests.</b></div>
<div class="p1">
<b><br /></b></div>
<div class="p1">
<b>Our language isn't ours. It is the path to the convenient. In bot speak, names are approximate and rights (ignoring the stalwart protectors) are meaningless. Our concepts of trademarks, rights ownership and the like are going to be steamrolled by other factors, other "agents" acting at the user's behest. The language and the needs of the spontaneous are immediate!</b></div>
<style type="text/css">
p.p1 {margin: 0.0px 0.0px 2.0px 0.0px; font: 14.0px 'Helvetica Neue'; color: #454545}
p.p2 {margin: 0.0px 0.0px 0.0px 0.0px; font: 12.0px 'Helvetica Neue'; color: #454545; min-height: 14.0px}
p.p3 {margin: 0.0px 0.0px 2.0px 0.0px; font: 14.0px 'Helvetica Neue'; color: #454545; min-height: 17.0px}
span.s1 {color: #e4af0a}
</style>ncubeeighthttp://www.blogger.com/profile/07874174268038500927noreply@blogger.com0tag:blogger.com,1999:blog-4288746931605915378.post-64722965219447269202018-12-22T17:45:00.000-08:002019-04-19T15:59:22.601-07:00<span style="background-color: #444444;"><span style="background-color: #444444;"></span><span style="background-color: #666666;"><span style="color: white;"><span style="background-color: #444444;"></span><br /></span></span>
</span><br />
<div dir="ltr" id="docs-internal-guid-d59ee135-556e-7cb4-4fe2-3d1b661a59eb" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<div class="separator" style="clear: both; text-align: center;">
<span style="background-color: #666666;"><span style="color: white;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhL4oeOVr8gpgwodRiU3IbPKPZMgER7Ra9QN8yrVsrKoNXompJf2dCZfwh2M2DR41xEC-kUFSvHxdUoTfq5xMTYs3grrI70KsQLxi-mbG0Q3iVWpzC64Uv9F9UkFZw4xgigu8lAEed76So/s1600/Earthlight.png" imageanchor="1" style="background-color: #444444; clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhL4oeOVr8gpgwodRiU3IbPKPZMgER7Ra9QN8yrVsrKoNXompJf2dCZfwh2M2DR41xEC-kUFSvHxdUoTfq5xMTYs3grrI70KsQLxi-mbG0Q3iVWpzC64Uv9F9UkFZw4xgigu8lAEed76So/s320/Earthlight.png" width="320" /></a></span></span></div>
<span style="background-color: #666666;"><span style="background-color: #444444; color: white;"><span style="font-family: "arial"; font-size: 14.6667px; font-style: normal; font-weight: 400; text-decoration: none; vertical-align: baseline;">At last year’s Game Developers Conference I had the chance to experience new immersive video environments that are being created by game developers releasing titles for the new Oculus and HTC Vive and Google Daydream platforms. One developer at the conference, Opaque Mulitimedia, demonstrated "<a href="https://www.youtube.com/watch?v=9bWz8fbkZXI" target="_blank">Earthlight</a>" which gave the participant an opportunity to crawl on the outside of the International Space Station as the earth rotated below. In the simulation, a Microsoft Kinect sensor was following the position of my hands. But what I saw in the visor was that my hands were enclosed in an astronaut’s suit. The visual experience was so compelling that when my hands missed the rungs of the ladder I felt a palpable sense of urgency because the environment was so realistically depicted. (The space station was rendered as a scale model of the actual space station using the "Unreal" game physics engine.) The experience was so far beyond what I’d experienced a decade ago with the crowd-sourced simulated environments like Second Life, where artists </span><a href="https://www.youtube.com/watch?v=5xUrscmIj9I" style="text-decoration: none;"><span style="font-family: "arial"; font-size: 14.6667px; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: underline; vertical-align: baseline;">create 3D worlds in a server-hosted environment </span></a><span style="font-family: "arial"; font-size: 14.6667px; font-style: normal; font-weight: 400; text-decoration: none; vertical-align: baseline;">that other people could visit as avatars. </span></span></span></div>
<div dir="ltr" id="docs-internal-guid-d59ee135-556e-7cb4-4fe2-3d1b661a59eb" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: #666666;"><span style="background-color: #444444; color: white;"><br /></span></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: #666666;"><span style="background-color: #444444; color: white;"><span style="font-family: "arial"; font-size: 14.6667px; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline;">Since that time I’ve seen some fascinating demonstrations at Mozilla’s Virtual Reality developer events. I’ve had the chance to witness a 360 degree video of a skydive, used the WoofbertVR application to visit real art gallery collections displayed in a simulated art gallery, spectated a simulated launch and lunar landing of Apollo 11, and browsed 360 photography depicting dozens of fascinating destinations around the globe. This is quite a compelling and satisfying way to experience visual splendor depicted spatially. With the <a href="http://www.nytimes.com/marketing/nytvr/index.html" target="_blank">New York Times</a> and </span><a href="http://www.roadtovr.com/imax-investing-50-million-create-new-level-premium-high-quality-vr-content/?utm_source=Road+to+VR+Daily+News+Roundup&utm_campaign=1247b27f94-RtoVR_RSS_Daily_Newsletter&utm_medium=email&utm_term=0_e2e394ad33-1247b27f94-168221269" style="text-decoration: none;"><span style="font-family: "arial"; font-size: 14.6667px; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: underline; vertical-align: baseline;">iMax</span></a><span style="font-family: "arial"; font-size: 14.6667px; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline;"> now entering the industry, we can anticipate an incredible surfeit of media content to take us to places in the world we might never have a chance to go.</span></span></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: #666666;"><span style="background-color: #444444; color: white;"><br /></span></span></div>
<div class="separator" style="clear: both; text-align: center;">
<span style="background-color: #666666;"><span style="color: white;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhcCtVZKj8UvUTyBMVnRAZ_KWwH0sFldebyFfX6iWn8rnryOQoAb51dBuB11GiQ3Z2KEWLPvcdoIKasLlE15Yx3OnrYOA6rOfJ2c830a7w8kSRBemBQYVlEcemeudrG95qyAz4XsUA2Uwc/s1600/Mitchell%2540Mozfest.jpg" imageanchor="1" style="background-color: #444444; clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="150" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhcCtVZKj8UvUTyBMVnRAZ_KWwH0sFldebyFfX6iWn8rnryOQoAb51dBuB11GiQ3Z2KEWLPvcdoIKasLlE15Yx3OnrYOA6rOfJ2c830a7w8kSRBemBQYVlEcemeudrG95qyAz4XsUA2Uwc/s200/Mitchell%2540Mozfest.jpg" width="200" /></a></span></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: #666666;"><span style="color: white;"><span style="background-color: #444444; font-family: "arial"; font-size: 14.6667px; font-style: normal; font-weight: 400; text-decoration: none; vertical-align: baseline;">Still the experiences of these simulated spaces seems very ethereal. Which brings me to another emerging field. At Mozilla Festival in London a few years ago, I had a chance to meet <a href="https://www.linkedin.com/in/yasuaki-kakehi-05689256" target="_blank">Yasuaki Kakehi </a>of Keio University in Japan, who was demonstrating a haptic feedback device called <a href="https://www.youtube.com/watch?v=eoztAbSlpfU" target="_blank">Techtile</a>. The Techtile was akin to a microphone for physical feedback that could then be transmitted over the web to another mirror device. When he put marbles in one cup, another person holding an empty cup could feel the rattle of the marbles as if the same marble impacts were happening on the sides of the empty cup held by the observer. The sense was so realistic, it was hard to believe that it was entirely synthesized and transmitted over the Internet. Subsequently, at the Consumer Electronics Show, I witnessed another of these haptic speakers. But this one conveyed the sense not by mirroring precise physical impacts, but by giving precisely timed pulses, which the holder could feel as an implied sense of force direction without the device actually moving the user's hand at all. It was a haptic illusion instead of a precise physical sensation.</span></span></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: #666666;"><span style="color: white;"><span style="background-color: #444444; font-family: "arial"; font-size: 14.6667px; font-style: normal; font-weight: 400; text-decoration: none; vertical-align: baseline;"><br /></span></span></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: #666666;"><span style="color: white;"><span style="background-color: #444444; font-family: "arial"; font-size: 14.6667px; font-style: normal; font-weight: 400; text-decoration: none; vertical-align: baseline;">As haptics work advances it has potential to impact common everyday experiences beyond the theoretical and experimental demonstrations I experienced. This year haptic devices are available in the new Honda cars on sale this year as <a href="http://automobiles.honda.com/sensing/" target="_blank">Road Departure Mitigation</a>, whereby steering wheels can simulate rumble strips on the sides of a lane just by sensing the painted lines on the pavement with cameras.</span></span></span></div>
<div class="separator" style="clear: both; text-align: center;">
<span style="background-color: #666666;"><span style="color: white;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiy_LMBsklCgiBfTmc_l1Wi2TVX3zSLYodqhe_o-W5XVLsAHqcXvvaRKG_aBvVJQzROXb_Om-anEnkxrv1uKfMt4Zkmsnnxvv_oQxVPOKoYR52hoLrcJaInsmyh9EGVrRzipCwBMJlT5z0/s1600/20090227_Emoti_400px.jpg" imageanchor="1" style="background-color: #444444; clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="223" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiy_LMBsklCgiBfTmc_l1Wi2TVX3zSLYodqhe_o-W5XVLsAHqcXvvaRKG_aBvVJQzROXb_Om-anEnkxrv1uKfMt4Zkmsnnxvv_oQxVPOKoYR52hoLrcJaInsmyh9EGVrRzipCwBMJlT5z0/s320/20090227_Emoti_400px.jpg" width="320" /></a></span></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: #666666;"><span style="color: white;"><span style="background-color: #444444; font-family: "arial"; font-size: 14.6667px; font-style: normal; font-weight: 400; text-decoration: none; vertical-align: baseline;">I am also very excited to see this field expand to include music. At Ryerson University's SMART lab, Dr. Maria Karam, Dr. Deborah Fels and Dr. Frank Russo applied the concepts of haptics and somatosensory depiction of music to people who didn't have the capability of appreciating music aurally. Their first product, called the <a href="https://www.youtube.com/watch?v=gA--cOs87p4&t=22s" target="_blank">Emoti-chair</a> breaks the frequency range of music to depict different audio qualities spatially to the listeners back. This is based on the concept that the human cochlea is essentially a large coiled surface upon which sounds of different frequencies resonate and are felt at different locations. While I don't have perfect pitch, I think having a spatial-perception of tonal scale would allow me to develop a cognitive sense of pitch correctness to compensate using a listening aid like this. Fortunately, Dr. Karam is advancing this work to introduce new form factors to the commercial market in coming years.</span></span></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: #666666;"><span style="color: white;"><span style="background-color: #444444; font-family: "arial"; font-size: 14.6667px; font-style: normal; font-weight: 400; text-decoration: none; vertical-align: baseline;"><br /></span></span></span></div>
<div class="separator" style="clear: both; text-align: center;">
<span style="background-color: #666666;"><span style="color: white;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhQ1lLpXkr9HQNmml2lhRDxZsl7xI_as06dsgcmUpS1kYnZ-Tnec1kFx_JPr3n6ph37olBWGE_WCrAU21SRdrHLqAvkb8NkLNIPkJ3rs0D3zl2tq9KwWzYpv9nbFfLOZy0TCGI_DBtN5aE/s1600/GendangBelek.jpg" imageanchor="1" style="background-color: #444444; clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="192" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhQ1lLpXkr9HQNmml2lhRDxZsl7xI_as06dsgcmUpS1kYnZ-Tnec1kFx_JPr3n6ph37olBWGE_WCrAU21SRdrHLqAvkb8NkLNIPkJ3rs0D3zl2tq9KwWzYpv9nbFfLOZy0TCGI_DBtN5aE/s320/GendangBelek.jpg" width="320" /></a></span></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: #666666;"><span style="color: white;"><span style="background-color: #444444; font-family: "arial"; font-size: 14.6667px; font-style: normal; font-weight: 400; text-decoration: none; vertical-align: baseline;">Over many years I have had the chance to study various forms of folk percussion. One of the most interesting drumming experiences I have had was a visit to Lombok, Indonesia where I had the chance to see a Gamelan performance in a small village along with the large Gendang Belek drums accompanying. The Gendang Belek is a large barrel drum worn with a strap that goes over the shoulders. When the drum is struck the reverberation is so fierce and powerful that it shakes the entire body, by resonating through the spine. I had an opportunity to study Japanese Taiko while living in Japan. The taiko, resonates in the listener by resonating in the chest. But the experience of bone-conduction through the spine is altogether a more intense way to experience rhythm.</span></span></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: #444444;"><span style="background-color: #666666;"><span style="color: white;"><br /></span></span>
<span style="background-color: #666666;"><span style="color: white;"><span style="background-color: #444444; font-family: "arial"; font-size: 14.6667px; font-style: normal; font-weight: 400; text-decoration: none; vertical-align: baseline;">Because
I am such an avid fan of physical experiences of music, I am frequently
gravitating toward bassey music. I tend to play it in a
sub-woofer-heavy car stereo, or seek out experiences to hear this music
in nightclub or festival performances where large speakers animate
the lower frequencies of music. I can imagine that if more people had the physical experience of drumming that I've had, instead of just the auditory experience of it, more people would enjoy making music themselves.</span></span></span></span><br />
<span style="background-color: #444444;"><span style="background-color: #666666;"><span style="color: white;"><br /></span></span>
<span style="background-color: #666666;"><span style="color: white;"><span style="font-family: "arial"; font-size: 14.6667px; font-style: normal; font-weight: 400; text-decoration: none; vertical-align: baseline;">As more innovators like <a href="http://www.tadsinc.com/" target="_blank">TADs Inc.</a> (an offshoot of the Ryerson University project) bring physical experiences of music to the general consumer, I look forward to experiencing my music in greater depth.</span></span></span></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: #666666;"><span style="color: white;"><span style="background-color: #444444; font-family: "arial"; font-size: 14.6667px; font-style: normal; font-weight: 400; text-decoration: none; vertical-align: baseline;"><br /></span></span></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: #666666;"><span style="color: white;"><span style="background-color: #444444; font-family: "arial"; font-size: 14.6667px; font-style: normal; font-weight: 400; text-decoration: none; vertical-align: baseline;"><br /></span></span></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: #666666;"><span style="color: white;"><span style="background-color: #444444; font-family: "arial"; font-size: 14.6667px; font-style: normal; font-weight: 400; text-decoration: none; vertical-align: baseline;"><br /></span></span></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: #666666;"><span style="color: white;"><span style="background-color: #444444; font-family: "arial"; font-size: 14.6667px; font-style: normal; font-weight: 400; text-decoration: none; vertical-align: baseline;"><br /></span></span></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: #666666;"><span style="color: white;"><span style="background-color: #444444; font-family: "arial"; font-size: 14.6667px; font-style: normal; font-weight: 400; text-decoration: none; vertical-align: baseline;"><br /></span></span></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: #666666;"><span style="color: white;"><span style="background-color: #444444; font-family: "arial"; font-size: 14.6667px; font-style: normal; font-weight: 400; text-decoration: none; vertical-align: baseline;"><br /></span></span></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: #666666;"><span style="color: white;"><span style="background-color: #444444; font-family: "arial"; font-size: 14.6667px; font-style: normal; font-weight: 400; text-decoration: none; vertical-align: baseline;"><br /></span></span></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: #666666;"><span style="color: white;"><span style="background-color: #444444; font-family: "arial"; font-size: 14.6667px; font-style: normal; font-weight: 400; text-decoration: none; vertical-align: baseline;"><br /></span></span></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: #666666;"><span style="color: white;"><span style="background-color: #444444; font-family: "arial"; font-size: 14.6667px; font-style: normal; font-weight: 400; text-decoration: none; vertical-align: baseline;"><br /></span></span></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: #666666;"><span style="color: white;"><span style="font-family: "arial"; font-size: 14.6667px; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline;"><br /></span></span></span></div>
ncubeeighthttp://www.blogger.com/profile/07874174268038500927noreply@blogger.com0tag:blogger.com,1999:blog-4288746931605915378.post-52625224916533316542016-04-14T23:30:00.008-07:002021-08-18T08:30:17.972-07:00<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh1uocWFYjLD2V8cahGwtPKyUQ9ePmV3GNaPYUTXPsYgZIb_27Zl3rj_VqB82bhzsLn93OVxSkxl0ysNPLkMV-h497tsjlE4MeOxsh6_8k2IoUoF1auC9b1Z0eRXqpFq6cyK9yKXZjZui4/s1600/ibm-cognea-liesl-capper.jpg" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="150" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh1uocWFYjLD2V8cahGwtPKyUQ9ePmV3GNaPYUTXPsYgZIb_27Zl3rj_VqB82bhzsLn93OVxSkxl0ysNPLkMV-h497tsjlE4MeOxsh6_8k2IoUoF1auC9b1Z0eRXqpFq6cyK9yKXZjZui4/s200/ibm-cognea-liesl-capper.jpg" width="200" /></a>Back in 2005-2006 my friend <a href="http://www.businessinsider.com/this-woman-sold-a-chat-bot-to-ibm-2014-5" target="_blank">Liesl</a> told me about the coming age of chat bots. I had a hard time <span id="goog_1683990497"></span><span id="goog_1683990498"></span>imagining how people would embrace products that simulated human voice communication but were less “intelligent”. She ended up building a company that allowed people to have polite automated service agents that you could program with a certain specific area of intelligence. Upon launch she found that people spent a lot more time conversing with the bots than they did with the average human service agent. I wondered if this was because it was harder to get questions answered, or if people just enjoyed the experience of conversing with the bots more than they enjoyed talking to people. Perhaps when we know the customer service agent is paid hourly, we don't gab in excess. But if it's chat bot you're talking to, we don't feel the need to be hasty?<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh0acoJK8Q-egodf3yqTExOKnvNjlKngHmjdt9mMKmmu12U6NDzO1DU-hujenZWokPQmy4xoO56xcQGwTZ5e9A2KgR9pbjtSKd1nvBG4cYQW3szG8ji-3KMESPnk25EQYaJI4csezJL78A/s1600/AmazonEcho.png" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh0acoJK8Q-egodf3yqTExOKnvNjlKngHmjdt9mMKmmu12U6NDzO1DU-hujenZWokPQmy4xoO56xcQGwTZ5e9A2KgR9pbjtSKd1nvBG4cYQW3szG8ji-3KMESPnk25EQYaJI4csezJL78A/s200/AmazonEcho.png" width="173" /></a>Fast forwarding over a decade later, IBM has acquired her company into the Watson group. During a dinner party we talked about <a href="http://www.amazon.com/echo" target="_blank">Amazon’s Echo </a>sitting on her porch. She and her husband would occasionally make DJ requests to “Alexa” (the name for Echo’s internal chat bot) as if it was a person attending the party. It was definitely seeming that the age of more intelligent bots is upon us. Most folk who have experimented with speech-input products of the last decade have become accustomed to talking to bots in a robotic monotone devoid of accent because of the somewhat random speech capture mistakes that early technology was burdened with. If the bots don't adapt to us, we go to them it seems, mimicking the 50's and 60's movies of how we've heard robotic voices depicted to us in science fiction films.<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjtYHcadEPtfZzemRoWVQHQOIFVmRkAIp5ud2iTUVgO-AC4P75Z8kOby7sr_90toJUuYIExg2Bz6AI_Q0vr2xtlKVPBPzNsuViozVUNSRiuqUIaFOZKoQ8Ztdv5tbzSmFjV27HPUzjkSAc/s1600/Cortana-InputSettingsWin10.jpg" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjtYHcadEPtfZzemRoWVQHQOIFVmRkAIp5ud2iTUVgO-AC4P75Z8kOby7sr_90toJUuYIExg2Bz6AI_Q0vr2xtlKVPBPzNsuViozVUNSRiuqUIaFOZKoQ8Ztdv5tbzSmFjV27HPUzjkSAc/s200/Cortana-InputSettingsWin10.jpg" width="120" /></a>This month both Microsoft and Facebook have announced open bot APIs for their respective platforms. Microsoft’s platform for integration is an open source "<a href="https://dev.botframework.com/" target="_blank">Bot Framework</a>" that allows any web developer to re-purpose the code to inject new actions or content tools in the active discussion flow of their conversational chat bot called <a href="http://windows.microsoft.com/en-us/windows-10/getstarted-what-is-cortana" target="_blank">Cortana</a>, which is built into the search box of every Windows 10 operating system they license. They also demonstrated how the new bot framework allows their <a href="https://en.wikipedia.org/wiki/Skype" target="_blank">Skype</a> messenger to respond to queries intelligently if they have the right libraries loaded. Amazon refers to the app-sockets for the Echo platform as "skills", whereby you load a specific field of intelligence into the speech engine to allow Alexa to query the external sources you wish. I noticed that both Alexa team and Cortana team seem to be focusing on pizza ordering in both their product demos. But one day we'll be able to query beyond the basic necessities. In my early demonstration back in 2005 of the technology Liesl and <a href="https://www.youtube.com/watch?v=yGWwHfEqgic" target="_blank">Dr. 
Zakos</a> (her cofounder) built, they had their chat bot ingest all my blog writings about <a href="http://rhythmatism.com/" target="_blank">folk percussion</a>, then answer questions about certain topics that were in my personal blog. If a bot narrows a question to a subject matter, its answers can be uncannily accurate to the field!<br />
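The "skills" model common to these platforms can be illustrated with a toy sketch: each skill registers the intents it can handle, and the assistant routes an utterance to whichever loaded skill claims it. All the names below are invented for illustration and don't reflect any vendor's actual API.

```python
# A toy sketch of the "skill" model: skills register intent handlers,
# and the assistant routes utterances to them. Class and method names
# here are hypothetical, not any platform's real SDK.

class Skill:
    def __init__(self, name):
        self.name = name
        self.handlers = {}          # keyword -> handler function

    def intent(self, keyword):
        """Decorator registering a handler for utterances containing keyword."""
        def register(fn):
            self.handlers[keyword] = fn
            return fn
        return register

class Assistant:
    def __init__(self):
        self.skills = []

    def load(self, skill):          # analogous to enabling a "skill"
        self.skills.append(skill)

    def ask(self, utterance):
        text = utterance.lower()
        for skill in self.skills:
            for keyword, handler in skill.handlers.items():
                if keyword in text:
                    return handler(text)
        return "Sorry, I don't know how to help with that."

pizza = Skill("pizza")

@pizza.intent("pizza")
def order_pizza(text):
    return "One pizza, coming right up."

assistant = Assistant()
assistant.load(pizza)
print(assistant.ask("Can you order me a pizza?"))  # One pizza, coming right up.
```

The point of the sketch is the extensibility: the assistant's core knows nothing about pizza until a third-party skill is loaded, which is what makes the open-API approach so different from a centrally managed brain.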
<br />
Facebook's plan is to inject bot intelligence into the main <a href="http://newsroom.fb.com/news/2016/04/messenger-platform-at-f8/" target="_blank">Facebook Messenger app</a>. Their announcement closely follows the concept Microsoft introduced: developers can port new capabilities into each platform vendor's chat engine. It may be that both Microsoft and Facebook are planning for the social capabilities of their collaboration on <a href="https://www.oculus.com/" target="_blank">Oculus</a>, Facebook's headset-based immersive virtual world platform, which runs on Windows 10 machines.<br />
<br />
The outliers in this era of chat bot openness are Apple's <a href="http://www.apple.com/ios/siri/" target="_blank">Siri</a> and <a href="https://support.google.com/websearch/answer/2940021" target="_blank">Ok Google</a>, speech tools that work like a centrally managed brain. (Siri may query the web using specific sources like <a href="http://www.wolframalpha.com/" target="_blank">Wolfram Alpha</a>, but most of the answers you get from either will be consistent with the answers others receive for similar questions.) What I find elegant about the approaches Amazon, Microsoft and Facebook are taking is that they make the knowledge engine of the core platform extensible in ways a single company could not manage alone. The approach also lets customers personalize their experience of the platform by adding the ported services they choose. My hope is that the speech platforms will become much more like the Internet of today, where we are used to having very diverse content experiences based on our personal preferences.<br />
<br />
It is very exciting to see speech becoming a useful interface for interacting with computers. While the content of the web is already one of the knowledge ports of these speech tools, the open APIs of Cortana, Alexa and Facebook Messenger will usher in an exciting new means of creating compelling internet experiences. My hope is for a bit of standardization, so that a merchant like Domino's doesn't have to keep rebuilding its chat bot tools for each platform.<br />
<br />Each of these innovative companies is dealing with the hard questions of how to get us out of our stereotypes of robot behavior and get us back to acting like people again, returning to the main interface that humans have used for eons to interact with each other. Ideally the technology will fade into the background and we'll start acting normally again instead of staring at screens and tapping fingers.<br /><br />ncubeeighthttp://www.blogger.com/profile/07874174268038500927noreply@blogger.com0tag:blogger.com,1999:blog-4288746931605915378.post-62613120829881089222015-09-10T19:49:00.002-07:002015-09-10T19:49:58.823-07:00Bluetooth LE beacons and the coming hyper-local web of the physical world<br />
<br />
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right; margin-left: 1em; text-align: right;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjfNfb25_ZQ_7TQjKO3XpJ_Ek6S8y5VUUEtMpEOvl2O9hAgw9QHRK-fdhblXZxUa-rcCNWiyOxoi8itWC-H4z_n738AxeCkm-KjGWTxjpINnSFwlyl2lyDdIa7u9pWwcY7rNB-XTVG7lok/s1600/PhilzTruck.jpg" imageanchor="1" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjfNfb25_ZQ_7TQjKO3XpJ_Ek6S8y5VUUEtMpEOvl2O9hAgw9QHRK-fdhblXZxUa-rcCNWiyOxoi8itWC-H4z_n738AxeCkm-KjGWTxjpINnSFwlyl2lyDdIa7u9pWwcY7rNB-XTVG7lok/s320/PhilzTruck.jpg" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="font-size: xx-small;">Philz Coffee mobile single-serving brewing truck at San Francisco Marina</span></td></tr>
</tbody></table>
<div style="text-align: right;">
</div>
Recently, my wife and I were riding bikes around the Fort Mason area on the San Francisco peninsula. Lo and behold, my wife sees someone with a Philz coffee cup walk by. She says to herself, “Wait a tick! There’s no Philz in this neighborhood!” San Franciscans are tribal about their preferred coffees. We typically know all the physical locations of our favorite roasters and brewers. My wife knows I’m a Philz devotee. So seeing a Philz cup outside its natural habitat caught her attention. Minutes later, we ran into the new Philz truck, parked on Marina Blvd. Booyah!<br />
<br />
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi4U89y1RQVhEOafIChjT4b4iHxyhWDqKfEBlr1U8B0UEOtAV9a8jdVKmMA-_Ox3KaRFVsiJxFDSQZmlcb-AmMREg7ypbHT6WeIq-cXPy6UfKioKMcIPVn8mI9Cf8mLrhS7_q1LlXPbgi4/s1600/Phil.jpg" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" height="135" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi4U89y1RQVhEOafIChjT4b4iHxyhWDqKfEBlr1U8B0UEOtAV9a8jdVKmMA-_Ox3KaRFVsiJxFDSQZmlcb-AmMREg7ypbHT6WeIq-cXPy6UfKioKMcIPVn8mI9Cf8mLrhS7_q1LlXPbgi4/s200/Phil.jpg" width="200" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="font-size: xx-small;">Phil Jaber in the Original Phiz Coffee Shop</span></td></tr>
</tbody></table>
<br />
This is the first time I had thought about the half-life of a coffee cup in the wild. The various coffee-roasting factions demarcate their turf using the coffee cups they give visitors, as a sort of viral advertising strategy. And the radius of inspiration lasts as long as it takes to consume the beverage, which may be five minutes for someone walking and drinking at a moderate pace. This is plenty of time for one customer to inspire Pavlovian thirst reactions in a dozen passersby.<br />
<br />
This brings me to the emerging tech trend of the season: the use of Bluetooth beacons for transmitting location signals and web content. (See the Apple <a href="https://support.apple.com/en-us/HT202880" target="_blank">iBeacon</a> and Google <a href="https://github.com/google/eddystone" target="_blank">Eddystone</a> initiatives for the nitty-gritty.) We can assume that the first applications of these tools will be marketing-related, like the coffee cups, sending signals that span from a few feet to fifty feet depending on the transmission power of the signal. But one can imagine a scenario where beacons of hundreds of varieties might talk to our wearable devices or phones, without intruding on our attention, in order to sift out topics, events and messages of specific interest to us personally. As a first step, something has to be written to be read.<br />
<br />
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right; margin-left: 1em; text-align: right;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhE22wjY9c3a7daaK0YJ1nqRPqvqkrhxHtSi_o63ekpWtv_4SpnqVeC8Yd8NtX4PckNH3Sm_LdIxQZ3DYMFXpFQ3SIuSM4nRzlqux7tZbUzwD9tMitc0ELwl78Asao7yy7g3NVdJENjfig/s1600/tweetie_nearby.jpg" imageanchor="1" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img alt="" border="0" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhE22wjY9c3a7daaK0YJ1nqRPqvqkrhxHtSi_o63ekpWtv_4SpnqVeC8Yd8NtX4PckNH3Sm_LdIxQZ3DYMFXpFQ3SIuSM4nRzlqux7tZbUzwD9tMitc0ELwl78Asao7yy7g3NVdJENjfig/s200/tweetie_nearby.jpg" title="Source: http://www.nydailynews.com/news/tweetie-2-takes-twitter-new-level-article-1.417122" width="131" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="font-size: xx-small;">Tweetie Nearby View</span></td></tr>
</tbody></table>
There have been some interesting initiatives around hyper-local web content discovery in Augmented Reality style applications. My favorites include Yelp <a href="http://www.wikihow.com/Use-the-Augmented-Reality-Monocle-on-the-Yelp-for-iPhone-App" target="_blank">Monocle</a>, which spatially rendered restaurant reviews over the viewfinder of a phone's camera; Loren Brichter's <a href="http://www.nydailynews.com/news/tweetie-2-takes-twitter-new-level-article-1.417122" target="_blank">Tweetie</a> app, which allowed users to point their phone in any direction to see what was being tweeted nearby; and the <a href="http://shopkick.com/" target="_blank">Shopkick</a> app, which listens for the high-pitched audio signals, beyond the human auditory range, that Shopkick transmitters in stores send out. All of these are app-specific signals. It becomes very interesting when these kinds of strategies are done in an open fashion that doesn't require a special app to consume them. The web itself is the best means to move this kind of use case forward. That is exactly what is happening with this new push to leverage Bluetooth. And of course Bluetooth signals decay rapidly over short distances, so they are only relevant to people nearby, for whom content can be tailored. <br />
<br />
Why is the idea of the decaying signal good? Think about the movie <a href="http://www.imdb.com/title/tt2883512/" target="_blank">Chef</a>, in which the protagonist tweets his location and updates while driving across the country. It doesn't make a whole lot of sense to use a global platform for a location-specific service, does it? Great marketing film for Twitter, but a ridiculous premise. Chefs need to talk to their communities, not the world, when publicizing today's menu. And a web where everyone has to manually follow sources and meticulously manage inbound information is a web that will inundate our attention. When it comes to the things that matter to us in the tangible world, we need the web to speak to us when it's relevant and stay quiet at other times. Otherwise, the signal and utility of the web get lost in the noise.<br />
<br />
Google's innovation with the "<a href="https://github.com/google/eddystone/tree/master/eddystone-url" target="_blank">Eddystone URL</a>" introduces the concept of the beacon as a pointer to a web server. The URL a beacon transmits lets any modern browser connect the user to a broad array of web content associated with that specific location, without needing a custom application to read it. Every smartphone in existence can render and interact with web content published over HTTP. <br />
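To make this concrete, here is a short sketch of how a URL gets packed into the Eddystone-URL frame format described in the spec linked above: a frame-type byte (0x10), a calibrated TX power byte, a one-byte scheme prefix, and a compressed URL body in which common suffixes like ".com/" collapse to single bytes. The TX power value below is an arbitrary placeholder.

```python
# Sketch of an Eddystone-URL frame encoder, following the public
# Eddystone-URL specification (frame type 0x10).

SCHEMES = {"http://www.": 0x00, "https://www.": 0x01,
           "http://": 0x02, "https://": 0x03}
EXPANSIONS = {".com/": 0x00, ".org/": 0x01, ".edu/": 0x02, ".net/": 0x03,
              ".info/": 0x04, ".biz/": 0x05, ".gov/": 0x06,
              ".com": 0x07, ".org": 0x08, ".edu": 0x09, ".net": 0x0A,
              ".info": 0x0B, ".biz": 0x0C, ".gov": 0x0D}

def encode_eddystone_url(url, tx_power=-20):
    """Compress a URL into an Eddystone-URL advertisement frame."""
    frame = bytearray([0x10, tx_power & 0xFF])  # frame type, TX power at 0 m
    # The URL scheme is encoded as a single byte (longest match wins).
    for scheme in sorted(SCHEMES, key=len, reverse=True):
        if url.startswith(scheme):
            frame.append(SCHEMES[scheme])
            url = url[len(scheme):]
            break
    else:
        raise ValueError("URL scheme not representable in Eddystone-URL")
    # Encode the remainder, substituting one-byte codes for common suffixes.
    i = 0
    while i < len(url):
        for suffix in sorted(EXPANSIONS, key=len, reverse=True):
            if url.startswith(suffix, i):
                frame.append(EXPANSIONS[suffix])
                i += len(suffix)
                break
        else:
            frame.append(ord(url[i]))
            i += 1
    if len(frame) > 20:  # 2 header bytes + scheme byte + up to 17 URL bytes
        raise ValueError("Encoded URL too long for a single beacon frame")
    return bytes(frame)
```

The whole frame has to fit in roughly 20 bytes, which is why short URLs (or URL shorteners) matter so much to beacon publishers.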
<br />
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhaHKqr5eI2UBailMKlA4ZNJII5VagxPqEimy8E5mk_qIEByiublJ1Tg5UMJ1leaq2PcpqJOHFhcbXkARhpnMQpfUVBs4sKLI5G65yCJNYInnLEMww6V-X-oCCyKatqNseO55n0kbM2W9Y/s1600/EstimoteBeacon.jpg" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhaHKqr5eI2UBailMKlA4ZNJII5VagxPqEimy8E5mk_qIEByiublJ1Tg5UMJ1leaq2PcpqJOHFhcbXkARhpnMQpfUVBs4sKLI5G65yCJNYInnLEMww6V-X-oCCyKatqNseO55n0kbM2W9Y/s200/EstimoteBeacon.jpg" width="112" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="font-size: xx-small;">Admin view of Estimote beacon </span></td></tr>
</tbody></table>
Beacon developer <a href="http://estimote.com/" target="_blank">Estimote</a> is joining the Eddystone initiative, and will soon support the new URL broadcasting as part of their existing line of Bluetooth beacons. Their current SDKs allow custom app developers to map locations and tailor apps to those specific locations. Once Eddystone URLs are integrated, they will be readable by notification-management tools like Google Now, and probably soon by custom scanners, mobile web browsers and lock-screen apps.<br />
<br />
Once Google exposes support for beacon recognition in Android, the adoption of contextual Bluetooth beacons could become fairly mainstream in large metropolitan areas. (It will be even better if it's done in the <a href="http://source.android.com/" target="_blank">Android Open Source Project</a>, so that forked Android initiatives like Xiaomi's and the Kindle Fire can benefit from the innovations and efforts of "beacon publishers".) This could greatly simplify our daily use of Internet tools. We will no longer need a dedicated app just to check bus schedules, get restaurant reviews, make reservations, and so on. Those scenarios will happen on demand, as needed, with very little hassle for us as users.<br />
<br />
In the coming years the companies that provide our phones, browsers and other communications tools will be innovating ways to surface and manage these content signals as they proliferate. So it is unlikely to be something many of us will need to manage actively. But very soon the earliest iterations of augmented reality apps will start to surface in our mobile devices in compelling new ways that will allow the physical environment around us to animate and inform us when we want it to. And it will be easy to ignore at all other times.<br />
<br />
One step beyond the mere receiving and sorting of signals is the concept that we might one day transmit our own signals to beacon receivers in our proximity. Imagine the concept of <a href="http://cyber.law.harvard.edu/projectvrm/Main_Page" target="_blank">Vendor Relationship Management</a>, popularized by Doc Searls: a means of transmitting our preferences to the outside world and having information and services tailor themselves to us. In a world where we express our wants, needs and opinions digitally, the digital-physical world might in turn tailor messages to us without the need for physical action. <br />
<br />
The first step for this wave of innovation to be truly useful will be to have the digital world's wealth of ambient content available to us as needed, nearby. The second step will be surfacing and filtering it in a manageable way. (This is already in process.) The third step will be the assertion of <i>preference</i> through the tools the OS, apps and browsers provide. I think this is the area that will benefit most from developer innovation.<br />
<br />
ncubeeighthttp://www.blogger.com/profile/07874174268038500927noreply@blogger.com0tag:blogger.com,1999:blog-4288746931605915378.post-88764823172037603882015-05-11T18:40:00.003-07:002015-05-11T18:42:49.565-07:00Mesh Networking for app delivery in Apple OSX and Windows GWX<br />
The upcoming release of the Windows 10 operating system is exciting for a number of bold new technologies Microsoft plans to introduce, including the new Microsoft Edge browser and Cortana speech-recognition tools. The rollout is called GWX, for "Get Windows 10", and will reach all Windows users from version 7 to 8.1. Particularly interesting to me is that it will be the first time the Windows operating system pushes out software over mesh networks in a peer-to-peer (aka "P2P") model. <br />
<br />
Over a decade ago software tools for creating peer-to-peer and mesh networks proliferated as alternative approaches to bandwidth-intensive content delivery and task processing. Allowing networked devices to mesh and delegate tasks remotely between each other avoids the burden of one-to-one connections between a computer and a central hosting server. Through this process the originating host server can delegate tasks to other machines connected in the mesh and then turn its attention to other tasks while the function (be it a piece of content to be delivered/streamed or a calculation to be executed) cascades through the meshed devices where there is spare processing capacity.<br />
<br />
Offloading one-to-one tasks to mesh networks can unburden infrastructure that provides connectivity to all end users. So this is a general boon to the broader Internet infrastructure in terms of bandwidth availability. While the byte volume that reaches the end user is the same, the number of copies sent is fewer. (To picture this, consider a Netflix stream, which goes from a single server to a single computer, to a torrent stream that is served across a mesh over dozens of computers in the user's proximity.) <br />
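A back-of-envelope sketch (with hypothetical numbers) shows why this matters to a software publisher: if peers serve most of the demand to each other, origin-server egress shrinks proportionally, even though every end user still receives the full payload.

```python
# Back-of-envelope model of origin-server egress under peer-assisted
# delivery. The file size, user count and peer fraction are made-up
# numbers for illustration only.

def origin_egress_gb(file_gb, downloaders, peer_fraction):
    """Bytes the origin server must send when peers serve part of demand.

    peer_fraction: share of total demand that peers serve to each other
    (0.0 = pure client-server, approaching 1.0 = nearly all peer-served).
    """
    total_demand = file_gb * downloaders
    return total_demand * (1.0 - peer_fraction)

# A 3 GB OS update going to 1,000,000 machines:
client_server = origin_egress_gb(3, 1_000_000, 0.0)  # all copies from origin
peer_assisted = origin_egress_gb(3, 1_000_000, 0.9)  # 90% served by peers
```

Every user still downloads 3 GB either way; only the number of copies the central server must push changes.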
<br />
Here is a short list of initiatives that have utilized mesh networking in the past:<br />
SETI@home (deciphering radio signals from space for pattern interpretation across thousands of dormant PCs and Macs), Electric Sheep (collaborative sharing of fractal graphic animations with crowd-sourced feedback), Skype (social networking and telephony, prior to the Microsoft acquisition),<br />
Veoh (video streaming), BitTorrent (file sharing), Napster (music sharing), One Laptop per Child (Wi-Fi connectivity in off-grid communities), FireChat (phones create a mesh over Bluetooth frequencies)<br />
<br />
Meshing is emerging in software delivery primarily because it relieves Apple and Microsoft of much of the burden of download fulfillment.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjNg7VVxvUaZOEpHfULjKLU3C8oOVpA47pq89rfxuwUUw9RP88FkA4FylcglX6ZCRMd8U26fKBv-6Z5Tfrr32zzh2S5e2myWEdYjiyhS4syZyaNyU-uBTTLvnZChffcU4_WciXFd6kTFgg/s1600/Peer-Software-Updating.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="191" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjNg7VVxvUaZOEpHfULjKLU3C8oOVpA47pq89rfxuwUUw9RP88FkA4FylcglX6ZCRMd8U26fKBv-6Z5Tfrr32zzh2S5e2myWEdYjiyhS4syZyaNyU-uBTTLvnZChffcU4_WciXFd6kTFgg/s1600/Peer-Software-Updating.png" width="320" /></a></div>
Apple's first introduction of this capability came in the Yosemite operating system update. Previously, software downloads were managed by laptop/desktop computers and pushed over USB to peripherals like iPods, iPhones and iPads. When those devices shifted from the hub-and-spoke model to receiving updates directly over the air, two or more devices on a single Wi-Fi access point would each make a separate request to the iTunes marketplace. With Apple's new networked permissions flow, one download can be shared between all household computers and all peripherals. It makes ecological sense to unburden the web from multiple copies of the same software going to the same person or household. It benefits Apple directly to send fewer copies of software, and it serves the user no less.<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEguQYZmsNi8jhQ5LrsDdi4sbcWw7Vd0APWXsf83bHVaqEJddzFFuqhggXlpDZUUJpVdN27ko5oMJRMDN8uy0l3OE9e58unXA73q07DJlC8dMAgm5WQwOih7lssLJDgsqrg29foK-yCok5w/s1600/UpdateOptions.PNG" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEguQYZmsNi8jhQ5LrsDdi4sbcWw7Vd0APWXsf83bHVaqEJddzFFuqhggXlpDZUUJpVdN27ko5oMJRMDN8uy0l3OE9e58unXA73q07DJlC8dMAgm5WQwOih7lssLJDgsqrg29foK-yCok5w/s1600/UpdateOptions.PNG" unselectable="on" width="290" /></a><br />
Microsoft is going a step further with the upcoming Windows 10 release. Their version of mesh app distribution lets your machine fetch copies of Windows updates not only from familiar sources on your own Wi-Fi network; your computer may also pull an update from an unknown peer on the broader Internet that is in your proximity.<br />
<br />
What I find very interesting about this is that Microsoft had previously been very restrictive about software distribution processes. Paid software products are their core business model, after all. So introducing a process that meshes Windows machines into a peering network for software delivery demonstrates that the issues around software piracy and rights management have largely been resolved. <br />
<br />
For more detail about the coming Windows 10 rollout, <a href="http://www.zdnet.com/article/get-windows-10-microsofts-hidden-roadmap-for-the-biggest-software-upgrade-in-history/" target="_blank">ZDNet</a> has a very good update. <br />
<div class="separator" style="border-image: none; clear: both; text-align: center;">
<br /></div>
ncubeeighthttp://www.blogger.com/profile/07874174268038500927noreply@blogger.com0tag:blogger.com,1999:blog-4288746931605915378.post-50060104244601244092015-04-30T20:40:00.002-07:002015-05-01T16:38:21.925-07:00Calling Android users: Help Mozilla Map the World!<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhkTa4YR__XBi3JJI_RXznBAwuf-x3b3WCAiHYHS4ErUdh9DZokZHJjD-0Qh4esT9qlIYZe2js9TnAUiDRdq7iW1wX5ujFaAtT5JZLJEHvIRs64qQ3ZiWqeIoQ5ZA-bzdgJUaFLhqiBbEk/s1600/WifiLocation.PNG" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhkTa4YR__XBi3JJI_RXznBAwuf-x3b3WCAiHYHS4ErUdh9DZokZHJjD-0Qh4esT9qlIYZe2js9TnAUiDRdq7iW1wX5ujFaAtT5JZLJEHvIRs64qQ3ZiWqeIoQ5ZA-bzdgJUaFLhqiBbEk/s1600/WifiLocation.PNG" height="127" width="200" /></a>Many iPhone users may have wondered why Apple prompts them with a message saying “Location accuracy is improved when Wi-Fi is turned on” each time they choose to turn Wi-Fi off. Why does a phone that has GPS (Global Positioning System) capability need to use Wi-Fi to determine its location? <br />
<br />
The reason is fairly simple. There are, of course, thousands of radio signals traveling through the walls of buildings all around us. What makes the Wi-Fi signal (or even Bluetooth) particularly useful for location mapping is that it travels a relatively short distance before it decays, because Wi-Fi transmissions are low power. A combination of three or more Wi-Fi signals can therefore be used by a phone in a very small area to triangulate its location on a map, in the same manner that earthquake shockwave strengths can be used to triangulate epicenters. Wi-Fi hubs don't need to transmit their locations to be useful; most are oblivious to their own location. It is the phone's interpretation of their signal strengths and inferred positions that creates the value for the phone's internal mapping capabilities. No data that travels over Wi-Fi is relevant to using the radio signal for triangulation; it is merely the signal's strength or weakness that makes it useful. (Most Wi-Fi hubs are password protected, and the data sent over them is encrypted.) <br />
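A toy illustration of the idea (not Mozilla's actual algorithm; the hotspot positions, readings and path-loss exponent are made-up values): a signal-strength reading is first converted to an approximate distance with a log-distance path-loss model, and the position that best fits three or more known hotspot locations is then solved for.

```python
# Toy Wi-Fi triangulation: RSSI -> distance via a log-distance path-loss
# model, then a brute-force search for the best-fitting position.
# All numbers here are illustrative, not real calibration data.
import math

def rssi_to_distance(rssi_dbm, tx_power_dbm=-40, path_loss_exp=2.5):
    """Log-distance path-loss model: d = 10^((TxPower - RSSI) / (10 n))."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def trilaterate(anchors):
    """Grid-search the integer point minimizing squared distance error.

    anchors: list of (x, y, measured_distance) tuples in meters.
    """
    best, best_err = None, float("inf")
    for gx in range(0, 101):
        for gy in range(0, 101):
            err = sum((math.hypot(gx - x, gy - y) - d) ** 2
                      for x, y, d in anchors)
            if err < best_err:
                best, best_err = (gx, gy), err
    return best

# Three hotspots at known positions, with ideal distance readings
# taken from the point (30, 40):
anchors = [(0, 0, 50.0),
           (100, 0, math.hypot(70, 40)),
           (0, 100, math.hypot(30, 60))]
print(trilaterate(anchors))  # → (30, 40)
```

A production implementation would weight readings by signal quality and solve the least-squares problem directly, but the geometry is the same: three overlapping distance circles pin down one point.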
<br />
Being able to let phone users determine their own location is of keen interest to developers, who can’t make location-based services work without fairly precise location determinations. The developers don't want to track the users per se; they want users to be able to self-determine their location when they request a service at a precise point in space (say, requesting a Lyft ride or checking in at a local eatery). There is a broad range of businesses that try to help phones accurately orient themselves on maps, and the data each application developer uses may differ across a range of phones. Android, Windows and iPhones all have different data sources for this, which can make app experience frustratingly inconsistent for many users, even when they’re all using the same basic application.<br />
<br />
At Mozilla, we think the best way to solve this problem is to create an open source solution. We are app developers ourselves, and we want a consistent quality of experience for our users, along with all the websites our users access through our browsers and phones. If we make location data accessible to developers, we should be able to help Internet users navigate their world more consistently. By doing it in an open source way, dozens of phone vendors and app developers can utilize this open data source without the cumbersome and expensive contracts sometimes imposed by location service vendors. And, as Mozilla, we do this in a way that empowers users to make a personal choice as to whether they wish to contribute data or not.<br />
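For developers, the service exposes a simple HTTP interface: POST the Wi-Fi readings a device can currently see and get back an estimated position with an accuracy radius. A minimal sketch follows; the endpoint path, API key and MAC addresses shown are illustrative assumptions, so check the service documentation before relying on them.

```python
# Sketch of querying a Mozilla Location Service-style geolocate endpoint.
# The endpoint path, "test" key and MAC addresses are placeholders.
import json
import urllib.request

MLS_ENDPOINT = "https://location.services.mozilla.com/v1/geolocate"

def build_geolocate_body(access_points):
    """JSON body listing the Wi-Fi readings the device can currently see."""
    return json.dumps({"wifiAccessPoints": access_points}).encode("utf-8")

def geolocate(access_points, api_key="test"):
    """POST the readings; the response carries an estimated lat/lng
    and an accuracy radius in meters."""
    req = urllib.request.Request(
        MLS_ENDPOINT + "?key=" + api_key,
        data=build_geolocate_body(access_points),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example call (placeholder MAC addresses):
# geolocate([{"macAddress": "01:23:45:67:89:ab", "signalStrength": -51},
#            {"macAddress": "01:23:45:67:89:cd", "signalStrength": -68}])
```

Note that the request carries only hardware identifiers and signal strengths, not any traffic from those networks, which is consistent with the privacy point above.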
<br />
How can I help? There are two ways Firefox users can get involved (and several ways that developers can help). We have two applications for Android with the capability to “stumble” Wi-Fi locations.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgi3fmLOubEwvq7W8M-j1QYs5C5ZCIqiMynHgD-yhBkCxrYtIGDAVDvdwpPO-pkP7EjLCwUL1IDCVCHAOBKNZjyDir6EFYGKncQ3dw2erYMuGefw_gohyphenhyphen1yoAc9NmAAongAof69GrW5Wu4/s1600/MozStumbler.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgi3fmLOubEwvq7W8M-j1QYs5C5ZCIqiMynHgD-yhBkCxrYtIGDAVDvdwpPO-pkP7EjLCwUL1IDCVCHAOBKNZjyDir6EFYGKncQ3dw2erYMuGefw_gohyphenhyphen1yoAc9NmAAongAof69GrW5Wu4/s1600/MozStumbler.png" height="200" width="117" /></a></div>
The first app is called “Mozilla Stumbler” and is available for free download in the Google Play store (https://play.google.com/store/apps/details?id=org.mozilla.mozstumbler). By opening MozStumbler and letting it collect the radio frequencies around you, you help the location database register those frequencies so that future users can determine their location. None of the data your Android phone contributes can be specifically tied to you; it collects the ambient radio signals solely for the purpose of improving map accuracy. To make MozStumbler fun to use, we have also created a leaderboard for users to keep track of their contributions to the database. <br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhdAsNGZajIPPwH1vHWgnx5e8vTZk15pA5CyWamOE9MGiTzVsBISl6_PfijCgcNKvrC4-kQepo2c5td3Gun18z4QL4f05I3nELM9uhn3wFCDmrvJ1zZUj6EMQn0VaAwLx3GbuDpWUusMfM/s1600/ToolsMenu.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhdAsNGZajIPPwH1vHWgnx5e8vTZk15pA5CyWamOE9MGiTzVsBISl6_PfijCgcNKvrC4-kQepo2c5td3Gun18z4QL4f05I3nELM9uhn3wFCDmrvJ1zZUj6EMQn0VaAwLx3GbuDpWUusMfM/s1600/ToolsMenu.png" height="302" width="320" /></a></div>
<br />
The second app is our Firefox mobile browser, which runs on the Android operating system. (If it becomes possible to stumble on other operating systems, I’ll post an update to this blog.) You need to take a couple of steps to enable background stumbling in your Firefox browser; specifically, you have to opt in to sharing location data with Mozilla. To do this, first download Firefox on your Android device. On the first run you should get a prompt asking what data you want to share with Mozilla. If you bypassed that step, or installed Firefox a long time ago, here’s how to find the setting:<br />
<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgTKNRfAPTKeuyY96x0fQyDTso4mmd7oTIxPs8clwIoS6RoaHDUi7JDtP9QULt0q3ixQ9FJ8QvPGj79SAR86bPsPH1d3jNRN2KtXHqKFRecjoBIa8GZb61L95fk7kbdtO3I4N6zpxefCGE/s1600/MozillaOptions.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgTKNRfAPTKeuyY96x0fQyDTso4mmd7oTIxPs8clwIoS6RoaHDUi7JDtP9QULt0q3ixQ9FJ8QvPGj79SAR86bPsPH1d3jNRN2KtXHqKFRecjoBIa8GZb61L95fk7kbdtO3I4N6zpxefCGE/s1600/MozillaOptions.png" height="184" width="200" /></a></div>
1) Click on the three dots at the right side of the Firefox browser chrome then select "Settings" (Above image)<br />
<br />
2) Select Mozilla (Right image)<br />
<br />
3) Check the box that says “Help Mozilla map the world! Share approximate Wi-Fi and cellular location of your device to improve our geolocation services.” (Below image)<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgEEZAlpDzvj58Atd3DLiOJooklWcU7rgHosCYGFLwdwzjPBsGZiA-wwmWyDetsLx_pp0tdl8TtJyAWQJBRplJQp6lEWvdamBDFZd1qf8f-HlXvDrk9vo29LT3OMJOYJo8raAxP9YOs6zI/s1600/LocationServicesOptIn.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgEEZAlpDzvj58Atd3DLiOJooklWcU7rgHosCYGFLwdwzjPBsGZiA-wwmWyDetsLx_pp0tdl8TtJyAWQJBRplJQp6lEWvdamBDFZd1qf8f-HlXvDrk9vo29LT3OMJOYJo8raAxP9YOs6zI/s1600/LocationServicesOptIn.png" height="400" width="341" /></a><br />
If you ever want to change your settings, you can return to Firefox's settings, or you can open your Android device's main settings menu at Settings &gt; Personal &gt; Location, which is also where you can see all the applications you've previously granted access to look up your physical location.<br />
<br />
The benefit of the data contributed is manifold:<br />
1) Firefox users on PCs (which do not have GPS sensors) will be able to determine their positions based on the signals of the Wi-Fi hotspots around them, rather than having to continually type in specific location requests. <br />
2) Apps on the Firefox operating system, and websites that load in Firefox, that use location services will perform more accurately and rapidly over time.<br />
3) Other developers who want to build mobile applications and browsers will be able to have affordable access to location service tools. So your contribution will foster the open source developer community.<br />
<br />
And in addition to the benefits above, my colleague <a href="https://mozillians.org/en-US/u/KaiRo/" target="_blank">Robert Kaiser</a> points out that even devices with GPS chips can benefit from getting Wi-Fi validation in the following way:<br />
<div class="post-message " data-role="message" dir="auto">
"1)
When trying to get a location via GPS, it takes some time until the chip actually has seen signals from enough satellites to determine a
location ("get a fix"). Scanning the visible wi-fi signals is faster
than that, so getting an initial location is faster that way (and who
wants to wait even half a minute until the phone can even start the
search for the nearest restaurant or cafe?).<br />
2) The location from
this wifi triangulation can be fed into the GPS system, which enables it
to know which satellites it roughly should expect to see and therefore
get a fix on those sooner (Firefox OS at least is doing that).<br />
3) In
cities or buildings, signals from GPS satellites get reflected or
absorbed by walls, often making the GPS position inaccurate or not being
able to get a fix at all - while you might still see enough wi-fi
signals to determine a position."</div>
<br />
Thank you for helping improve Mozilla Location Services.<br />
<br />
If you'd like to read more about Mozilla Location Services please visit:<br />
https://location.services.mozilla.com/<br />
To see how well our map currently covers your region, visit:<br />
https://location.services.mozilla.com/map#2/15.0/10.0<br />
If you are a developer, you can also integrate our open source code directly into your own app to enable your users to stumble for fun as well. Code is available here: https://github.com/mozilla/DemoStumbler<br />
For an in-depth write-up on the launch of the Mozilla Location Service please read Hanno's blog here: http://blog.hannosch.eu/2013/12/mozilla-location-service-what-why-and.html<br />
For a discussion of the issues on privacy management view Gervase's blog:<br />
http://blog.gerv.net/2013/10/location-services-and-privacy/ <br />
<br />
<br />ncubeeighthttp://www.blogger.com/profile/07874174268038500927noreply@blogger.com0tag:blogger.com,1999:blog-4288746931605915378.post-17762181943602365952014-12-24T09:00:00.000-08:002014-12-25T14:56:30.148-08:00Launching crowd-funded volunteer developer projects with Mozilla<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhzjRZ0wIj_RtYtcRuaiZQuqsAPdlGNae4w9hvkmmX46Trt536jxMFCVHEl17BAJXrevlxMvMkKR-zFvwxxFo0lxS4b24L6f_Uz3IjPik39g6176M4jwXkzWEK7WjjTkvmexH6t_JFZBn4/s1600/fisl13.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhzjRZ0wIj_RtYtcRuaiZQuqsAPdlGNae4w9hvkmmX46Trt536jxMFCVHEl17BAJXrevlxMvMkKR-zFvwxxFo0lxS4b24L6f_Uz3IjPik39g6176M4jwXkzWEK7WjjTkvmexH6t_JFZBn4/s1600/fisl13.jpg" height="172" width="200" /></a></div>
I joined Mozilla as a staff contributor three years ago, working on API partnerships, early Firefox operating system content partnerships for our phones, and identity management solutions for the web. I found Mozilla to be one of the most compelling work environments of my career. Beyond the prestige of working on a product that reaches 1 in 5 Internet users globally, I found the passion and inspiration of our community and coworkers infectious. <br />
<br />
A huge number of people who work on Firefox products and tools are volunteers. It’s amazing to be surrounded by people who work 100% based on the passion they have for their contribution to the web and its benefits to the global community. At the <a href="http://softwarelivre.org/fisl13" target="_blank">FISL 2013</a> conference in Brazil, I had my first experience working with the Mozilla Representatives and community volunteers.<br />
<br />
I bumped into an engineer, <a href="https://www.linkedin.com/profile/view?id=8793662" target="_blank">André Natal</a>, who wanted to create a speech-to-text engine for the Firefox operating system. (This enables tools like voice-triggered web search and map navigation.) With a few connections and recommendations, he was on his way, ultimately releasing the Firefox OS Marketplace’s first speech recognition app for the Brazilian market and going on to incorporate the capability into our core Firefox operating system. Another engineer, <a href="https://www.linkedin.com/profile/view?id=171190765" target="_blank">Fábio Magnoni</a>, helped us bug-fix our emergency dialer for the phones and networked our phone launch with dozens of publishers in Brazil. Both of these engineers had their own day jobs, but contributing to Mozilla and Firefox products was their passion.<br />
<br />
When I asked them why they worked on Firefox products, they said it was simply their excitement for the challenge and the experience of working with others toward the common goal of fostering an open developer environment.<br />
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right; margin-left: 1em; text-align: right;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjcg3kh7x1H6SstO8M1QuUAVRono1fXIRGnjGTfbyDkqesdcIdhyphenhyphent4O3nFvzceWI8V5BsHCWYiGRa6ChApEkI-fX9V5vmFCj8geG68ooWQZAifA60FcRKa_MPeSxP9IDXshoABDVzDJlk8/s1600/Cerros+Announcement.png" imageanchor="1" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img alt="Webmaker event 6619" border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjcg3kh7x1H6SstO8M1QuUAVRono1fXIRGnjGTfbyDkqesdcIdhyphenhyphent4O3nFvzceWI8V5BsHCWYiGRa6ChApEkI-fX9V5vmFCj8geG68ooWQZAifA60FcRKa_MPeSxP9IDXshoABDVzDJlk8/s1600/Cerros+Announcement.png" height="170" title="http://corozalwebtrainingcamp.tumblr.com/" width="200" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><a href="http://corozalwebtrainingcamp.tumblr.com/" target="_blank"><b>Webmaker Training Camp, Belize </b></a></td><td class="tr-caption" style="text-align: center;"><br /></td></tr>
</tbody></table>
<br />
This year I coordinated my first volunteer developer event: a conference for young developers in the northern district of Corozal in Belize. This is the story of its inception.<br />
<br />
Last spring my wife and I took a trip to Belize on a quest to discover as many Mayan ruins as we could. A chance dinner at the Cerros Beach Resort became a new side-project for me, when the founders of the resort, Jenny and Bill Bellerjeau, heard about the Mozilla mission and invited us to host an event at their resort. They wanted the kids of their village to have a chance to learn about the Internet. A quick post on social media resulted in a groundswell of interest among my coworkers and friends. <br />
<br />
Mozilla has a community of engaged evangelists called <a href="https://reps.mozilla.org/" target="_blank">Mozilla Representatives</a> who host "teach the web" events in their countries. Over 2000 are conducted annually around the world. (Ours is going to be <a href="https://events.webmaker.org/events/6619" target="_blank">Webmaker Event 6619</a>)<br />
<br />
Upon return to the US, I found that Mozilla had no local representatives in Belize. So we decided to hold a Webmaker event at Cerros Beach Resort, flying in four experienced Mozillians (from Costa Rica, Mexico, Canada and the UK) to lead a "teach the web" session for 30 kids over their winter holidays. Three of our US staff volunteered time to prepare the fundraising campaign and to ready donated phones and computers for the project.<br />
<br />
We used the crowd-funding platform Indiegogo to raise donations for the project and to keep in touch with our donors about the progress of our campaign. (Indiegogo gave us a dedicated partner page at https://www.indiegogo.com/partners/mozilla) We received 25 donations from our friends and connections, 11 donated computers, 30 phones acquired through Mozilla's partner <a href="http://www.zteusa.com/" target="_blank">ZTE</a>, 15 SIM cards donated by <a href="http://www.belizetelemedia.net/" target="_blank">Belize Telemedia</a>, and the real estate to host the event from <a href="http://cerrosbeachresort.com/" target="_blank">Cerros Beach Resort</a>.<br />
<br />
This week the trainers are headed to Belize. They will teach students in the region how to make web sites, how to create phone apps, and the more esoteric topics that first drew each trainer into web coding (Python, site localization, databases and web query syntax).<br />
<br />
It’s incredible to see a vision go from a simple brainstorm over dinner to a full week-long training event in a foreign country. Working with Mozillians like <a href="https://www.linkedin.com/profile/view?id=10327177" target="_blank">Andrea Wood</a>, <a href="https://www.linkedin.com/profile/view?id=13669504" target="_blank">Mike Poessy</a>, <a href="https://www.linkedin.com/profile/view?id=6213069" target="_blank">Kory Salsbury</a>, <a href="https://www.linkedin.com/profile/view?id=1297547" target="_blank">Shane Caraveo</a>, <a href="http://www.linkedin.com/in/matthewruttley" target="_blank">Matthew Ruttley</a> and <a href="https://www.linkedin.com/in/jegs87" target="_blank">Julio Gómez Sánchez</a> is part of what makes me proud to be a Mozillian.<br />
<br />
Profound thanks to <a href="http://cerrosbeachresort.com/about-us/" target="_blank">Bill and Jenny Bellerjeau</a> for their inspiration and generous offer to host this event and for all the participants, donors and volunteers for making this happen!<br />
<br />
We hope that others find the Indiegogo platform useful in funding their own developer projects in years to come.<br />
<br />
<br />
<br />
<br />
To hear the intros from each of the project contributors visit:
https://www.indiegogo.com/projects/corozal-web-training-camp#activity
We will have output from the training on the Tumblr page for the event:
http://corozalwebtrainingcamp.tumblr.com/ncubeeighthttp://www.blogger.com/profile/07874174268038500927noreply@blogger.com0tag:blogger.com,1999:blog-4288746931605915378.post-54909184867543220352014-09-23T21:23:00.001-07:002014-12-24T13:54:49.160-08:00The transition of web search - From open to fragmented and proprietary<div dir="ltr" style="line-height: 1.15; margin-bottom: 0pt; margin-top: 0pt;">
<span style="color: #f3f3f3; font-size: large;"><span style="font-family: Times,"Times New Roman",serif;"><br /></span></span></div>
<div dir="ltr" style="line-height: 1.15; margin-bottom: 0pt; margin-top: 0pt;">
<span style="color: #f3f3f3; font-size: large;"><span style="font-family: Times,"Times New Roman",serif;"><br /></span></span></div>
<div dir="ltr" style="line-height: 1.15; margin-bottom: 0pt; margin-top: 0pt;">
<span style="color: #f3f3f3; font-size: large;"><span style="font-family: Times,"Times New Roman",serif;"><span style="background-color: transparent; font-style: normal; font-variant: normal; font-weight: bold; text-decoration: none; vertical-align: baseline;">The beginning: open-web indexes</span></span></span></div>
<div dir="ltr" style="line-height: 1.15; margin-bottom: 0pt; margin-top: 0pt;">
<span style="color: #f3f3f3; font-size: large;"><span style="font-family: Times,"Times New Roman",serif;"><span style="background-color: transparent; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline;">Web
indexing (also called crawling or spidering) was a means of creating a
private
index of published HTML documents on the web in the 1990s. Popular
search engines would use keyword position in the document, frequency of
keyword mention, font style and meta-tag data to determine a public
document’s
relevance to a given keyword query. This manner of indexing and searching was
the first wave of search engine mechanics. Crawlers could start at the
top node of any public domain and follow every link from the hosting
server to make a replica of the content that was being posted daily.
(DNS, the domain name lookup service, was also a public resource for
translating legible domain names into internet protocol addresses, and
registries published newly registered domains daily.) The open web allowed multiple
companies to launch differentiated services indexing the same open web.
AOL, Yahoo!,
Altavista, HotBot, Lycos and Excite were all able to promote distinct services through their own portals and search interfaces and competition
thrived. Internet users, publishers and software companies benefited from this openness. </span></span></span></div>
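The first-wave ranking mechanics described above (keyword position, frequency of mention and meta-tag data) can be illustrated with a toy scorer. The weights and document fields below are invented for illustration and do not represent any particular engine's actual formula:

```python
# Illustrative first-wave relevance scoring: keyword frequency, keyword
# position and meta-tag presence. Weights are invented for illustration;
# no real engine's formula is implied.

def relevance(query, doc):
    """doc: dict with 'body' (list of words) and 'meta' (list of keywords)."""
    body = [w.lower() for w in doc["body"]]
    q = query.lower()
    frequency = body.count(q)
    if frequency == 0 and q not in (m.lower() for m in doc["meta"]):
        return 0.0
    # Earlier mentions score higher; meta-tag keywords add a fixed boost.
    position_bonus = 1.0 / (body.index(q) + 1) if frequency else 0.0
    meta_bonus = 0.5 if q in (m.lower() for m in doc["meta"]) else 0.0
    return frequency + position_bonus + meta_bonus

doc = {"body": "search engines index the web for search queries".split(),
       "meta": ["search", "index"]}
print(relevance("search", doc))  # 2 mentions + first-word bonus + meta boost = 3.5
```

A scorer like this is exactly what keyword-stuffing exploited: everything it measures is under the publisher's own control.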
<span style="color: #f3f3f3; font-size: large;"><span style="font-family: Times,"Times New Roman",serif;"><br /></span></span>
<br />
<div dir="ltr" style="line-height: 1.15; margin-bottom: 0pt; margin-top: 0pt;">
<span style="color: #f3f3f3; font-size: large;"><span style="font-family: Times,"Times New Roman",serif;"><span style="background-color: transparent; font-style: normal; font-variant: normal; font-weight: bold; text-decoration: none; vertical-align: baseline;">Early spam-evasion</span></span></span></div>
<div dir="ltr" style="line-height: 1.15; margin-bottom: 0pt; margin-top: 0pt;">
<span style="color: #f3f3f3; font-size: large;"><span style="font-family: Times,"Times New Roman",serif;"><span style="background-color: transparent; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline;">With the emergence of search engine optimization (SEO), many of
these engines had problems with irrelevant content surfacing for
specific popular keywords because the source of the ranking was based on
what a
single publisher posted on its own domains about itself. This approach was obviously
vulnerable to abuse. Some publishers would
“keyword-stuff” their meta-tags or put footers of keywords in invisible
fonts, strategies meant to dupe users into visiting irrelevant pages to inflate traffic volumes and ad dollars through extra impression volume. Each portal used manual intervention models to
address SEO spam. Yahoo!
had its own editorial directory that served the top layer of results,
which prevented SEO spam from surfacing to the top of a results page. Microsoft used LookSmart. AOL and Google used the Netscape Open Directory
Project (aka DMOZ) as human validation of site relevance to a specific
subject category. </span></span></span></div>
<div dir="ltr" style="line-height: 1.15; margin-bottom: 0pt; margin-top: 0pt;">
<span style="color: #f3f3f3; font-size: large;"><span style="font-family: Times,"Times New Roman",serif;"><br /></span></span>
<span style="color: #f3f3f3; font-size: large;"><span id="docs-internal-guid-221886c1-a8a1-7512-606f-ffced65120cf" style="background-color: transparent; font-family: 'Times New Roman'; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline;">While Yahoo!’s directory was proprietary and LookSmart’s was licensed in a syndication model, DMOZ was provided as a free tool to any site that needed a search directory. It was curated by a group of volunteers who felt prestige from being category editors of different subject matter <span style="font-family: 'Times New Roman';">branches of the directory <span style="font-family: 'Times New Roman';">tree</span></span>. <span style="font-family: 'Times New Roman';">This model of h</span>uman attention as a "relevancy multiplier" <span style="font-family: 'Times New Roman';">for </span>the directories was leveraged by Google to pull ahead in the anti-spam competition. Larry Page included in his "Pagerank" algorithm a mechanism to determine how many web publishers inserted links to specific domains, and index<span style="font-family: 'Times New Roman';">ed those as subject matter authorities for the <span style="font-family: 'Times New Roman';">topic of the <span style="font-family: 'Times New Roman';"><span style="font-family: 'Times New Roman';">reference</span> links</span></span></span>. Links that publishers embedded to other pages were seen as <span style="font-family: 'Times New Roman';">an</span> indication of curatorial attention of the linking site that merited th<span style="font-family: 'Times New Roman';">e referenced</span> page being displayed higher in the results. 
The unique advantage of Pagerank was that it mostly ignored what publishers said about themselves and focused what others said about them<span style="font-family: 'Times New Roman';">, subverting the SEO spam practices of the era.</span> (T<span style="font-family: 'Times New Roman';">he human in<span style="font-family: 'Times New Roman';">t</span>e<span style="font-family: 'Times New Roman';">rvention</span></span> theme recurs later.) </span></span><br />
<span style="color: #f3f3f3; font-size: large;"><span style="font-family: Times,"Times New Roman",serif;"><br /></span></span>
<span style="color: #f3f3f3; font-size: large;"><span style="font-family: Times,"Times New Roman",serif;"><span style="background-color: transparent; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline;">Pagerank peer-review
system, in contrast to the editor-review system of the directories, enabled Google to pull
ahead in relevance as the web grew in scale, thereby unseating Inktomi as the
algorithmic engine of the largest
portal at the time, Yahoo.com. Based on this success, Google
transitioned from a backfill-search provider, like Inktomi, Fast and
Looksmart to become a destination website like Altavista, Excite, Lycos and
HotBot. After winning AOL portal as a distribution partner, Google’s brand was
finally entrenched. (Google insisted on brand name "powered by" messaging that established brand familiarity unlike the other white label search providers.)</span></span></span></div>
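As a rough illustration of the link-counting idea behind Pagerank, here is a minimal power-iteration sketch in Python. The damping factor, iteration count and link graph are illustrative simplifications of the published algorithm, not Google's implementation:

```python
# Minimal Pagerank sketch: rank pages by who links to them,
# not by what they say about themselves. Illustrative only.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping page -> list of pages it links to."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, targets in links.items():
            if targets:  # distribute this page's rank across its outlinks
                share = damping * rank[page] / len(targets)
                for target in targets:
                    new_rank[target] += share
            else:  # dangling page: spread its rank evenly
                for target in pages:
                    new_rank[target] += damping * rank[page] / len(pages)
        rank = new_rank
    return rank

# A page many others link to (an "authority") outranks one that only self-promotes.
ranks = pagerank({"a": ["c"], "b": ["c"], "c": ["a"]})
print(max(ranks, key=ranks.get))  # "c" receives the most inbound attention
```

The key property the sketch demonstrates is that a page's score depends entirely on other publishers' linking decisions, which is what made keyword-stuffing one's own pages ineffective.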
<span style="color: #f3f3f3; font-size: large;"><span style="font-family: Times,"Times New Roman",serif;"><br /></span></span>
<br />
<div dir="ltr" style="line-height: 1.15; margin-bottom: 0pt; margin-top: 0pt;">
<span style="color: #f3f3f3; font-size: large;"><span style="font-family: Times,"Times New Roman",serif;"><span style="background-color: transparent; font-style: normal; font-variant: normal; font-weight: bold; text-decoration: none; vertical-align: baseline;">The emergence of paid search and subsequent market consolidation</span></span></span></div>
<div dir="ltr" style="line-height: 1.15; margin-bottom: 0pt; margin-top: 0pt;">
<span style="color: #f3f3f3; font-size: large;"><span style="font-family: Times,"Times New Roman",serif;"><span style="background-color: transparent; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline;">The
growth in scale of the web through the early 2000s required a
significant investment in infrastructure. It also forced the evolution
of the business models for existing search providers to subsidize this
growth. LookSmart, Inktomi and Overture
offered licensed search engines, in a hosted or feed-based model, which
consolidated thousands of small-scale sites using single domain lander pages. Unlike the destination-site search engines Altavista, Lycos, HotBot
and Excite, these distributed search engines could be integrated in to
the look and feel of the host site design. (Integrated look and feel was not supported by Google.) Based
on the consolidated market share that these companies developed,
pay-to-index (sponsored search) emerged. LookSmart’s and Inktomi’s
model was pay for inclusion in the index.
Goto/Overture, by contrast, offered a bid-for-placement layer occupying only the top 3-5 positions above other engines’ algorithmic results. Bid for placement yielded higher returns for Overture and enabled price competition between the hosted search providers, which ultimately favored portals turning to the Overture sponsored model.</span></span></span><br />
<span style="color: #f3f3f3; font-size: large;"><br /></span>
<span style="color: #f3f3f3; font-size: large;"><span style="font-family: Times,"Times New Roman",serif;"><span style="background-color: transparent; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline;">Google built its own version of the Overture style of paid search, resulting in a lawsuit over
Overture's US patent (USPTO patent number: 6,269,361). Yahoo!, then powered by Google, acquired
Overture, and
resolved the suit in a pre-IPO stock trade with Google. It then dropped
Google as an algorithmic backfill provider and used the Overture revenue to build its own in-house search
engine based on the acquired tools of Altavista, Inktomi and Fast
combined with its existing directory. Microsoft ultimately ended its dependency
on Overture and Inktomi to launch its own search engine, which it later
cross-licensed to Yahoo! in exchange for ad technology, sales collaboration and placement commitments. This technology struggle and pricing battle resulted in the market we have at present, with the market consolidated in two
major search engines with all portals using Bid-for-placement
advertising as the business model.</span></span></span></div>
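The bid-for-placement model that won this consolidation can be sketched as a simple pay-your-bid auction, in which the highest bidders on a keyword occupy the sponsored slots above the algorithmic results. This is a simplification: real systems added editorial review and, later, click-through weighting of bids:

```python
# Pay-your-bid sponsored placement sketch: the top N bidders on a keyword
# occupy the slots above algorithmic results. Real systems added editorial
# review and later weighted bids by click-through rate.

def sponsored_slots(bids, slots=3):
    """bids: dict advertiser -> bid per click. Return ranked (advertiser, bid)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:slots]

bids = {"acme": 0.40, "globex": 0.55, "initech": 0.25, "umbrella": 0.10}
print(sponsored_slots(bids))  # [('globex', 0.55), ('acme', 0.4), ('initech', 0.25)]
```

Because placement is auctioned per keyword, advertisers bid against each other directly, which is why the model generated higher returns than flat-fee inclusion.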
<span style="color: #f3f3f3; font-size: large;"><span style="font-family: Times,"Times New Roman",serif;"><br /></span></span>
<br />
<div dir="ltr" style="line-height: 1.15; margin-bottom: 0pt; margin-top: 0pt;">
<span style="color: #f3f3f3; font-size: large;"><span style="font-family: Times,"Times New Roman",serif;"><span style="background-color: transparent; font-style: normal; font-variant: normal; font-weight: bold; text-decoration: none; vertical-align: baseline;">Shift of landscape to users as publishers (aka “Web 2.0”)</span></span></span></div>
<div dir="ltr" style="line-height: 1.15; margin-bottom: 0pt; margin-top: 0pt;">
<span style="color: #f3f3f3; font-size: large;"><span id="docs-internal-guid-221886c1-a8a9-c8bf-07bc-8d8335c81bc1" style="background-color: transparent; font-family: 'Times New Roman'; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline;">With the boom of small-publishers, blogs and the emergence of social media, the publisher-centric web faced a significant challenge. Google’s Pagerank is after all a publisher-centric signal. Google and Bing indexing took weeks to surface new content in the web index so there was a significant lag from publication time to spidering by the crawler. A need for a more timely experience of the web was emerging. The fragmenting of publication platforms into different content types also presented a challenge. Some content was published behind login “walled-gardens” no longer accessible to web crawlers. Many actively blocked <span style="font-family: 'Times New Roman';">crawlers with "robot.txt" files or rotated their <span style="font-family: 'Times New Roman';">domains daily to ob<span style="font-family: 'Times New Roman';">fuscate new content from indexing. (<span style="font-family: 'Times New Roman';">New companie<span style="font-family: 'Times New Roman';">s were intentionally hiding their content from search engines because they feared the global dominance of US search <span style="font-family: 'Times New Roman';">engines.)</span></span></span> </span></span>S</span>ome publishers transitioned to dynamic pages, meaning that a web page was not a static indexable entity but an assembled collage of content sources served in a single view at time of page load. Finally, the seat of relevancy for new content couldn’t depend on an algorithm that took weeks to discover new signals of relevancy by waiting for publishers to cross-link to them. 
A Blogspot or Wordpress author proved to be just as likely to be an authority on a granular topics as a<span style="font-family: 'Times New Roman';">n</span> established site that had high popularity rating in Alexa and Comscore, popular site ranking services. For any trending real-time topic, it might be difficult for a conventional search engine to come up with a means to designate rank-authority from an algorithmic perspective. New search technologies were needed.</span></span></div>
<span style="color: #f3f3f3; font-size: large;"><span style="font-family: Times,"Times New Roman",serif;"><br /></span></span>
<br />
<div dir="ltr" style="line-height: 1.15; margin-bottom: 0pt; margin-top: 0pt;">
<span style="color: #f3f3f3; font-size: large;"><span style="font-family: Times,"Times New Roman",serif;"><span style="background-color: transparent; font-style: normal; font-variant: normal; font-weight: bold; text-decoration: none; vertical-align: baseline;">User-centric data and indexing</span></span></span></div>
<div dir="ltr" style="line-height: 1.15; margin-bottom: 0pt; margin-top: 0pt;">
<span style="color: #f3f3f3; font-size: large;"><span style="font-family: Times,"Times New Roman",serif;"><span style="background-color: transparent; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline;">With
the user-centric publishing trend (along with encrypted data exchange
mechanisms for easy server networking to transfer data within logged states) many companies that hosted user-publishing such as Blogger, Wordpress, Facebook, MySpace, Twitter, enabled open data
access via web protocols like RSS, ATOM and RestAPI feeds. And
because an account-authentication token could be passed at the time of
query, it allowed users to have a highly personalized view of the
real-time web. Bing, Google, Yahoo! experimented with their own
personalized search services during this period. </span></span></span></div>
<span style="color: #f3f3f3; font-size: large;"><span style="font-family: Times,"Times New Roman",serif;"><br /></span></span>
<br />
<div dir="ltr" style="line-height: 1.15; margin-bottom: 0pt; margin-top: 0pt;">
<span style="color: #f3f3f3; font-size: large;"><span style="font-family: Times,"Times New Roman",serif;"><span style="background-color: transparent; font-style: normal; font-variant: normal; font-weight: bold; text-decoration: none; vertical-align: baseline;">Real-time Search concepts</span></span></span></div>
<div dir="ltr" style="line-height: 1.15; margin-bottom: 0pt; margin-top: 0pt;">
<span style="color: #f3f3f3; font-size: large;"><span id="docs-internal-guid-221886c1-a8b7-dda8-a333-97e6edb41550" style="background-color: transparent; font-family: 'Times New Roman'; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline;">This open-data trend allowed a new algorithmic search engines to emerge that did not need to build complex offline indexes of the former large-publisher world. In fact, the search engines accessing social mentions of mainstream publisher content became a secondary signal of user interest and attention that mapped the “Web 1.0” web as effectively as Pagerank, but made it discoverable faster than traditional search engines. The unique advantage of these new approaches was that they were able to combine interest breadth signals across the global audience regarding the subject matter, together with its impact amplitude and duration on that audience over time. (Visualize social media impact signals acr<span style="font-family: 'Times New Roman';">oss these three dimensions and you'll <span style="font-family: 'Times New Roman';"><span style="font-family: 'Times New Roman';">grasp the significan<span style="font-family: 'Times New Roman';">ce of this in algorithmic analysis terms.) </span></span></span></span>This is because the the nature of microblog sites enabled the capture<span style="font-family: 'Times New Roman';"> of</span> reactions at the moment of discovery by readers, across disparate feedback channels. </span></span><br />
<span style="color: #f3f3f3; font-size: large;"><br /></span>
<span style="color: #f3f3f3; font-size: large;"><span id="docs-internal-guid-221886c1-a8b7-dda8-a333-97e6edb41550" style="background-color: transparent; font-family: 'Times New Roman'; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline;">Because of the brevity and immediacy of sharing that was taking off on Twitter specifically, and because of its <span style="font-family: 'Times New Roman';">i<span style="font-family: 'Times New Roman';">nherent<span style="font-family: 'Times New Roman';">ly public nature</span></span></span>, it became the focus of the<span style="font-family: 'Times New Roman';"> data mining of</span> real-time social signals. (Friendster, MySpace, Yahoo! 360 and F<span style="font-family: 'Times New Roman';">acebook were more follower<span style="font-family: 'Times New Roman';"> and private<span style="font-family: 'Times New Roman';"><span style="font-family: 'Times New Roman';">-</span>circle oriented.)</span></span></span> Early social search start-ups Summize and Topsy introduced means to rapidly search the public corpus of Twitter content leveraging the symbols users invented to demarcate identity handles (@...) from conversation threads (#...) in tweets. Topsy focused on who was sharing certain topics, and the authority of the source. Summize <span style="font-family: 'Times New Roman';">focused on <span style="font-family: 'Times New Roman';">"who" was sharing and</span> </span>“what” that was shared. Tweetdeck introduced a tool to sift multiple threads in parallel based on these topic and author threads. 
Collecta introduced means <span style="font-family: 'Times New Roman';">to</span> sift the broad corpus of real-time media <span style="font-family: 'Times New Roman';">including and beyon<span style="font-family: 'Times New Roman';">d Twitter </span></span>using XMPP sift<span style="font-family: 'Times New Roman';"> by subject matter in an </span>ephemeral and immediate relevancy analysis. Google launched “Google Realtime” to add social media queries to its traditional web content search. Bing, enabled users to connect to a Facebook account to see content that a user’s friends had shared on that social network.</span></span></div>
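The @handle and #hashtag conventions those tools leveraged can be sketched as a simple extraction pass over a stream of tweets. The regular expressions below are simplified; real tweet parsers handle Unicode and punctuation edge cases:

```python
import re
from collections import Counter

# Simplified extraction of identity handles (@...) and conversation
# threads (#...) from a stream of tweets, in the spirit of early
# social search tools. Real parsers handle many more edge cases.
HANDLE = re.compile(r"@(\w+)")
HASHTAG = re.compile(r"#(\w+)")

def index_tweets(tweets):
    """Build 'who' (handles) and 'what' (hashtags) frequency indexes."""
    who, what = Counter(), Counter()
    for tweet in tweets:
        who.update(h.lower() for h in HANDLE.findall(tweet))
        what.update(t.lower() for t in HASHTAG.findall(tweet))
    return who, what

who, what = index_tweets([
    "@alice check out #websearch trends",
    "RT @alice: #websearch is changing fast",
    "@bob what do you think of #realtime search?",
])
print(who.most_common(1))   # [('alice', 2)]
print(what.most_common(1))  # [('websearch', 2)]
```

Because the markers are embedded in the text itself, a search tool can build these "who" and "what" indexes incrementally as tweets stream in, with no offline crawl at all.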
<span style="color: #f3f3f3; font-size: large;"><span style="font-family: Times,"Times New Roman",serif;"><br /></span></span>
<br />
<div dir="ltr" style="line-height: 1.15; margin-bottom: 0pt; margin-top: 0pt;">
<span style="color: #f3f3f3; font-size: large;"><span style="font-family: Times,"Times New Roman",serif;"><span style="background-color: transparent; font-style: normal; font-variant: normal; font-weight: bold; text-decoration: none; vertical-align: baseline;">Social Sharing as currency of the moment</span></span></span></div>
<div dir="ltr" style="line-height: 1.15; margin-bottom: 0pt; margin-top: 0pt;">
<span style="color: #f3f3f3; font-size: large;"><span id="docs-internal-guid-221886c1-a8bd-6321-f6ae-985a72a553f1" style="background-color: transparent; font-family: 'Times New Roman'; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline;">One advantage that social networks (like Facebook, Twitter, Google+, Ning, LiveJournal, Nextdoor) have in terms of understanding the real-time web, is the nature of the public act of sharing. When a social media user makes a public or limited-circle share of a piece of web content, this is a vote of interest for the specific piece of content. Contrast this to the Pagerank vote a publisher makes when they embed an anchor-text link. Every user in the new schema is elevated to the level of attention-measuring that Pagerank had previously attributed only to webmasters. Think also how the unique distributed feedback system creates the ultimate anti-spam data point for algorithmic engines. It’s fairly easy to spam anchor text links these days. It’s very difficult to simulate 100,000 distributed people reacting to a piece of web content. When a piece of content is tweeted, liked or +1’ed by 1000 or more users who are not connected to each other within a given social network, that can be taken as an unbiased popularity/legitimacy measure. </span></span></div>
<span style="color: #f3f3f3; font-size: large;"><span style="font-family: Times,"Times New Roman",serif;"><br /></span></span>
<br />
<div dir="ltr" style="line-height: 1.15; margin-bottom: 0pt; margin-top: 0pt;">
<span style="color: #f3f3f3; font-size: large;"><span style="font-family: Times,"Times New Roman",serif;"><span style="background-color: transparent; font-style: normal; font-variant: normal; font-weight: bold; text-decoration: none; vertical-align: baseline;">Closed data, siloes- The web's current challenge for the next generation's search engines</span></span></span></div>
<div dir="ltr" style="line-height: 1.15; margin-bottom: 0pt; margin-top: 0pt;">
<div dir="ltr" id="docs-internal-guid-221886c1-a8c1-e4eb-233c-d4ef0471051f" style="line-height: 1.15; margin-bottom: 0pt; margin-top: 0pt;">
<span style="color: #f3f3f3; font-size: large;"><span style="background-color: transparent; font-family: 'Times New Roman'; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline;">While this data is very valuable for the social network itself, to tailor its own engagement approaches for users, it also of value outside the network. <span style="font-family: 'Times New Roman';">However, these days t</span>he data tends to be coveted by the platform owners, for good reason. There are privacy concerns of users of course, who you want to encourage to use the system with the privacy settings they expect around their shared content. And also companies like to keep their internal network data closed because it can unleash future economic potential in the form of relationships with marketers. </span></span></div>
<span style="color: #f3f3f3; font-size: large;"><br /></span>
<br />
<div dir="ltr" style="line-height: 1.15; margin-bottom: 0pt; margin-top: 0pt;">
<span style="color: #f3f3f3; font-size: large;"><span style="background-color: transparent; font-family: 'Times New Roman'; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline;">The Twitter APIs, as an example, generated a particularly interesting signal from an external perspective because of the public-viewability of any single post. Most user's understand that “tweets” were inherently public. This is why so many developers launched tools to process and digest the feed. When Twitter shut down the API access to Google and other web crawlers, there arose a need for Google to have it’s own microblog feature to replicate this real-time signal. Twitter needed to shut down access to the broader web to prepare for its IPO, shareholder sense of owner<span style="font-family: 'Times New Roman';">sh<span style="font-family: 'Times New Roman';">i</span>p</span> and upcoming in-network advertising platforms. <span style="font-family: 'Times New Roman';">But i</span>t's success as an open platform tool shows the promise for future entrants to replicate the strategy of its early days of openness.</span></span></div>
<span style="color: #f3f3f3; font-size: large;"><br /></span>
<br />
<div dir="ltr" style="line-height: 1.15; margin-bottom: 0pt; margin-top: 0pt;">
<span style="color: #f3f3f3; font-size: large;"><span style="background-color: transparent; font-family: 'Times New Roman'; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline;">A real-time barometer of what is popular with Internet users is an invaluable tool for refining any search engine. Keeping up with the rapid evolving nature of the web requires more unconstrained sources for showing interest<span style="font-family: 'Times New Roman';"> and </span>relevancy<span style="font-family: 'Times New Roman';">-</span>signals as a feedback mechanism to web publishers. Also web publishers of tomorrow need more ways to get their content published to new audiences. As the web is a dynamically growing entity, the opening up of social media services to allow users to publish more feedback on the web in public view is a promising step for the future of other search platforms. </span></span></div>
<span style="color: #f3f3f3; font-size: large;"><br /></span>
<span style="color: #f3f3f3; font-size: large;"><span style="background-color: transparent; font-family: 'Times New Roman'; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline;">Though we face a highly siloed web ecosystem now. Continued drive to revive openness is a boon to the tools that will emerge in coming years.</span></span><br />
<span style="color: #f3f3f3; font-size: large;"><br /></span>
<span style="color: #f3f3f3; font-size: large;"><br /></span>
<span style="color: #f3f3f3; font-size: large;"><br /></span>
<span style="color: #f3f3f3; font-size: large;"><span style="background-color: transparent; font-family: 'Times New Roman'; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline;">(Perspective of the author: <span style="font-family: 'Times New Roman';">I wo<span style="font-family: 'Times New Roman';">rked at LookSmart, Overture, Yahoo! and Collecta <span style="font-family: 'Times New Roman';">over a span of </span>1998 to 2010<span style="font-family: 'Times New Roman';">. S</span>o these industry shift<span style="font-family: 'Times New Roman';">s were witnessed from the perspective of someone representing these comp<span style="font-family: 'Times New Roman';">anies through the <span style="font-family: 'Times New Roman';">shift in the technologies available to us<span style="font-family: 'Times New Roman';">.)</span></span></span></span></span></span></span></span><br />
<span style="color: #f3f3f3; font-size: large;"><br />
<span style="background-color: transparent; font-family: 'Times New Roman'; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline;"><span style="font-family: 'Times New Roman';"><span style="font-family: 'Times New Roman';"><span style="font-family: 'Times New Roman';"><span style="font-family: 'Times New Roman';"><span style="font-family: 'Times New Roman';"><span style="font-family: 'Times New Roman';"> </span></span></span></span></span></span> </span></span></div>
ncubeeighthttp://www.blogger.com/profile/07874174268038500927noreply@blogger.com0tag:blogger.com,1999:blog-4288746931605915378.post-47886383647780941642012-12-22T19:00:00.001-08:002014-12-24T09:10:36.863-08:00Business relationships in Japan<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right; margin-left: 1em; text-align: right;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiOsCuYZTapr2JMzbK9S_WVJ8qQaq7S5OPgN_EF5Dt0T-b8Eg6HruOYrTn8gT4b21XgFiAyHK0w5wcOBo0sMIhPT4wMrMyG6j5g9V-EqSYDvl60WmmoieFIwxm9ARxA7b4sD1JNsKj71Ew/s1600/BTLSJP_Team.jpg" imageanchor="1" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiOsCuYZTapr2JMzbK9S_WVJ8qQaq7S5OPgN_EF5Dt0T-b8Eg6HruOYrTn8gT4b21XgFiAyHK0w5wcOBo0sMIhPT4wMrMyG6j5g9V-EqSYDvl60WmmoieFIwxm9ARxA7b4sD1JNsKj71Ew/s320/BTLSJP_Team.jpg" height="196" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">My team at BTLookSmart Japan</td></tr>
</tbody></table>
I had the pleasure of launching a business in Japan in 1999.* It was one of the early entrants in the business of search engine syndication, an industry once dominated by AltaVista, Inktomi, and Fast Search and Transfer, and now dominated by Google and Bing after many iterations of the business model. <br />
<br />
It was one of the most educational phases of my career. The Japanese partnerships I built showed me that much of what I'd been taught in university about Japanese business was wrong. Japanese companies were not averse to partnering with foreign firms. Japanese customers did not necessarily favor Japanese-made products over foreign products. These myths were touted by Kodak against Fuji Film in their <a href="http://www.internationalecon.com/fairtrade/fairpapers/ddaniels.html" target="_blank">WTO complaints</a> against Japan in general and Fuji specifically. And when I was in university, I was spoon-fed some of these same myths by representatives in the American Congress who favored market protectionism over enhanced global trade opportunities.<br />
<br />
One thing I learned in college that did prove true is that Japanese business hinges on relationships, even over profit. This is surely much of what Kodak had difficulty with. It's very difficult to sell in Japan if you don't invest significantly in the relationships that are essential to the fabric of society there. Fuji Film had invested tremendous amounts in social relationships and brand. It wasn't mere shelf space that determined Fuji's advantage in the market; it was market presence and people. <br />
<br />
One of my favorite memories in Japan came at the end of our first year after launching our search engine. (The first year was tremendous fun as we slowly accumulated partnership after partnership by investing enormous amounts of time and resources building an exceptional Japanese product. But for the point of this story, I'll defer that to a different blog post.) By 2001 we had a large number of leading Japanese portals powered by our search products, even after Google's entrance into the market. <br />
<br />
The time for this story was the beginning of our second year in market, time for our contractual renewals. (Contracts auto-renewed by mutual consent.) I was visiting each partner to gather their feedback on how we could make our services better and to ask each if they would like to continue working with us. One of them informed me that they'd received a bid of $1 million in guaranteed payment to switch from our search engine to another. I told them that I was unable to match the offer, as our business model was based on indexing services, not guaranteed payments. I thanked them for being such a good partner over the first year of our business there. I prepared to wrap up the meeting because I believed my not offering to match the bid would leave us at an awkward impasse.<br />
<br />
My counterpart smiled at me and said (translated), "We're not going to accept it! After all, they never pick up the phone when we call them. You do. And you come to visit us and listen to our feedback. That's more valuable to us." I was completely stunned, flattered and honored that my partner considered my service to be worth more than a $1 million buyout check. We renewed, and they remained our partner until I left the company.<br />
<br />
I've tried to dispel myths about Japanese culture when I've heard them and to encourage businesses to seek partnerships and distribution in Japan in spite of the difficulties Americans tend to anticipate there. I highly encourage anyone interested in learning about Japanese business and trade opportunities to explore the resources at the Japan External Trade Organization: http://www.jetro.go.jp/ Japan is of course one of the largest trading partners of the United States. But it is also one of the most enjoyable places I've ever done business.<br />
<br />
<br />
<br />
<br />
*The company I launched in Japan was a branch of the global BTLookSmart brand. BTLookSmart was a joint venture between British Telecom and Looksmart Ltd. to distribute web directories and web search on a model similar to Yahoo!'s destination portal model, making it affordable to have site search that was not powered by one of the global algorithmic search providers. BTLookSmart Japan Kabushiki Gaisha achieved 70% market reach before it was acquired by ValueCommerce Japan, which was 50% owned by Yahoo! Japan. The Looksmart Japan tools our team built were integrated into Yahoo! Japan's "MatchSmart" content advertising engine. Subsequently Yahoo! Japan acquired all assets of ValueCommerce Japan. After my departure from BTLookSmart Japan, I joined Overture Services, which was acquired by Yahoo! Inc. I conducted business development across Yahoo! Asia and returned to Japan to work on mobile advertising with Overture Japan prior to the sale of Overture Japan to Yahoo! Japan. Never a dull moment in internet business.ncubeeighthttp://www.blogger.com/profile/07874174268038500927noreply@blogger.com0tag:blogger.com,1999:blog-4288746931605915378.post-23259419698542165482012-11-04T14:15:00.000-08:002012-11-04T14:36:34.079-08:00Nurturing the first wind to get to the secondThere are many good reasons not to run. But if you decide to run anyway, there are still issues that can get in your way. I'll write this to address some of the ways that I've found to optimize my run and make it quite enjoyable. A rower friend of mine, Ken Clemmer, once commented that rowing is the art of knowing what not to do as you strive for efficient motion. Running is simple enough. But there are some habits that can make running difficult or tedious. I hope some of this is valuable.<br />
<br />
I took up running when I was an agile teen thanks to the inspiration of my high school track team and friends. Over the years, thanks to genetics, exercise and nutrition, I have bulked up to an ungainly 6-foot-tall stature. But as the nimbler sports seemed to recede from me, running stayed a constant. (I love rock climbing, but find it very challenging due to my weight-to-muscle ratio.) My biggest hurdle in running has been managing asthma. Asthma is caused by highly allergic bronchioles that swell in reaction to pollutants and pollen, often complicated by secondary factors as the lungs try to flush out the irritants with fluid. Exercise can exacerbate asthma symptoms for various reasons, including cortisol release and abnormal breathing patterns. But exercise can also improve asthma management if done in a low-stress, controlled fashion. It's my personal belief that running reduces dependence on medications and can lead to expanded lung capacity. If you're asthmatic, it's important to get your allergies under control before running. Don't run if it's going to result in needing bronchodilators. Excessive use of epinephrine (Primatene Mist) or albuterol (Ventolin, etc.) can lead to your lungs permanently altering their shape due to the manner of shallow breathing that asthmatics can fall into. <br />
<br />
I like to run with minimal visual or social distraction. Forests are great if you are near to one. But cities are filled with distractions and dangers which make excessive momentum a bad thing. Indoor running can minimize the impact of pollen, as well as the risk of being injured by moving vehicles or territorial animals. When on treadmills, I avoid mirrors and TV screens. You want to sink into the moment in a run, rather than be casting about for visual stimulus or being overly self-conscious about getting sweaty and exhausted in front of others.<br />
<br />
Music can be an excellent tool for focus in a run. I also find running to be a great mental state for appreciating subtleties of my favorite music. So if you have access to a music player and a library of good music, I recommend experimenting with different songs whose BPM (beats per minute) you can pace your run to. Slower music may compel you to lengthen your stride, while a faster BPM may push you toward a manic pace. So it's good to experiment with music that suits your desired pace and body size. <br />
<br />
I think of three forces of forward propulsion in my run.<br />
1) Desire to run. It's easier for me to have a good run in the mornings. If you're not in a good mental state, your run can be sloppy and unrewarding. It's better not to run at all than to force it and have bad associations with your run that are actually due to something else. A good mood is enhanced by a run. A bad mood seldom is.<br />
<br />
2) The amount of sugars in your bloodstream. Make sure that you're well nourished, but not bloated by a meal. The previous day's meal is probably going to give you the energy you need for today's exercise. I think it's great to have juice before a run. Your body isn't going to be doing any meaningful digestion during a run, so complex carbohydrates aren't going to improve your energy much if consumed immediately prior to the run.<br />
<br />
3) The amount of oxygen you can keep in the bloodstream to rapidly metabolize those sugars. This is the biggest factor for me. So I'll go into extensive detail about it. This is the core of my running experience. The first two factors are important. But breath is the most critical. <br />
<br />
To optimize breath, there are a few things you can consider. I always start a run slowly, pacing my breath evenly with my stride: typically two paces for every in-breath, and two for every out-breath. Starting a run fast can deplete the oxygen in your blood rapidly and set up a compensating pattern of breathing that will quickly exhaust you as you hyperventilate to keep up with the sudden surge of speed. Long ago I learned the trick of pursing your lips to blow air at pressure like a silent whistle. This increases the pressure in your lungs. At higher pressures, hemoglobin in your blood will take in oxygen at a higher rate. (This also can help with hypoxia when you are mountain climbing.) If you feel momentarily short of breath, in addition to altering your stride or pace, try pursed-lip out-breaths until you feel your blood oxygen level is balanced.<br />
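As a back-of-the-envelope illustration of that pacing arithmetic (this sketch is mine, not part of the original post; the function name and defaults are my own), the steps-per-breath pattern maps directly to a breathing rate:

```python
def breaths_per_minute(steps_per_minute, steps_per_in_breath=2, steps_per_out_breath=2):
    """Breaths per minute for a given running cadence and breath pattern.

    One full breath = one in-breath plus one out-breath. With the
    2-in / 2-out pattern described above, a full breath spans 4 steps.
    """
    steps_per_breath = steps_per_in_breath + steps_per_out_breath
    return steps_per_minute / steps_per_breath

# At a moderate cadence of 160 steps per minute, the 2/2 pattern
# works out to 40 full breaths per minute.
print(breaths_per_minute(160))  # → 40.0
```

Lengthening the pattern (say, three paces in and three out) at the same cadence slows the breathing rate further, which is the same effect as focusing on longer out-breaths.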
<br />
If you've ever wondered why breathing too much or too fast could trigger bronchiole constriction, I recommend some reading on the Ukrainian doctor Konstantin Pavlovich Buteyko's theories. His concepts are generally based on the idea that hyperventilating unbalances the gases in the bloodstream other than oxygen (chiefly carbon dioxide). Constriction of the bronchioles, by his theory, is an adaptive reaction to limit that imbalance. While I'm not a practitioner of formal Buteyko methods, this concept has helped me understand a bit better why the higher-lung panting that chronic asthmatics typically do actually exacerbates symptoms in an "asthma attack". <br />
<br />
To avoid breathing too rapidly when running, focus on the out-breath. Asthmatics tend to focus on breathing in rather than on breathing out. While the two would seem equivalent, altering the focus can result in deeper breathing patterns and use of your full lung capacity, which in turn leads to slow, deep breathing rather than manic panting. When running, if you suddenly feel the "itch" of bronchiole constriction, slow and lengthen your pace, slow your breathing to longer out-breaths, and then determine if you should continue. Sudden stops can exacerbate symptoms too. So it's good to view everything about exercise as gradual.<br />
<br />
When your breathing is optimal in a run, you can unlock a tremendous amount of energy and endurance. It typically takes a quarter or half a mile of running before my pace is optimized. This is when the run gets really fun. It's a feeling of being zoned-in. Time seems to slow. You'll be aware of the flow of your body in ways that you can make subtle adjustments. After a mile or so, you may get a feeling of "second wind", as your run becomes very calm. Your breathing may slow even more. If you want to keep going, you can pay attention to a few other things that will make your running endurance better.<br />
<br />
There are many good instructions on running stride. I'm not much of an expert here. What I focus on is having my head balanced like a golf ball on a tee, with little neck muscle involvement, and no throat constriction. I tend to land slightly on the ball of the foot in my stride instead of the heel and avoid excessive upward motion or shock to the heel/knees. I tend to relax my arms. Holding them up can make me speed my stride. Letting them hang lower lets me have a longer slower stride which I enjoy. I give little thought to speed. If I'm tread-milling, speed is only weighed against the blood sugar I feel. High energy times I'll run faster with longer strides. Low energy times I'll run slower with shorter strides.<br />
<br />
A Buddhist abbess once commented in a Zen Center dharma talk that as your practice improves the inefficient things you do will fall away until there's nothing but the practice. As I get deep into second wind, my run feels like this. I'm able to breathe very calmly and my thoughts can sink into the music I am enjoying and the splendor of what it feels like to be a human. (I regret I don't recall the name of the abbess who gave the talk. http://sfzc.org/ was the venue.)<br />
<br />
<br />
This article is dedicated to the memory of Robert Volberg. Restaurateur extraordinaire. Founder of Angeline's, Berkeley. Robert died of an asthma attack June 23rd, 2010.ncubeeighthttp://www.blogger.com/profile/07874174268038500927noreply@blogger.com0