Wednesday, February 15, 2017

https://en.wikipedia.org/wiki/Beowulf
A long time ago I remember reading Steven Pinker discussing the evolution of language.  I had read Beowulf, Chaucer and Shakespeare, so I was quite interested in these linguistic adaptations over time.  Language shifts rapidly through the ages, to the point that even the English of 500 years ago sounds foreign to us now.  His thesis in the piece was that English is going to shift toward the way its Chinese speakers pronounce it.  Essentially, the majority of speakers will determine the rules of the language's direction.  There are more Chinese in the world than native English speakers, so as they adopt and adapt the language, more of us will speak like the greater factions of our language's custodians.  The future speakers of English will determine its course.  By force of "majority rules", language will go in the direction of its greatest use, which will be the Pangea of the global populace seeking common linguistic currency with others of foreign tongues.  Just as the US dollar is an "exchange currency" standard at present between foreign economies, English is the shortest path between any two ESL speakers, no matter their background.

Subsequently, I heard these concepts reiterated in a Scientific American podcast.  The idea there was that English, when spoken by those who learned it as a second language, is easier for other speakers to understand than native-spoken English.  British, Indian, Irish, Aussie, New Zealand and American English are relics in a shift moving, very fast, away from all of them.  As much as we appreciate each, they are all toast.  Corners will be cut and idiomatic usage will be lost, as the fastest path to information conveyance determines the path that language takes in its evolution.  English will continue to be a mutt language flavored by those who adopt and co-opt it.  Ultimately, no matter what the original language was, the common use of it will set the rules of the future.  So we can say goodbye to grammar as native speakers know it.  There is a greater shift happening than our traditions.  And we must brace as this evolution takes us with it to a linguistic future determined by others.

I'm a person who has greatly appreciated idiomatic and aphoristic usage of English.  So I'm one of those, now old codgers, who cringes at the gradual degradation of language.  But I'm listening to an evolution in process, a shift toward a language of broader and greater utility.  So the cringes I feel are reactions to the time-saving adaptations of our language as it becomes something greater than it has been in the past.  Brits likely thought/felt the same as their linguistic empire expanded.  Now is just a slightly stranger shift.

This evening I was in the kitchen, and I decided to ask Amazon Alexa to play some Led Zeppelin.  This was a band that used to exist back in the 1970s, the era during which I grew up, and I knew their entire corpus very well.  So when I started hearing one of my favorite songs, I knew this was not what I had asked for.  It was a good rendering for sure, but it was not Robert Plant singing.  Puzzled, I asked Alexa who was playing.  She responded "Lez Zeppelin".  This was a new band to me, and a very good cover band I admit.  (You can read about them here: http://www.lezzeppelin.com/)
But why hadn't Alexa wanted to respond to my initial request?  Was it because Atlantic Records hadn't licensed Led Zeppelin's actual catalog for Amazon Prime subscribers?

Two things struck me.  First, we aren't going to be tailoring our English to Chinese ESL common speech patterns as Mr. Pinker predicted.  Second, we're probably going to be shifting our speech patterns toward what Alexa, Siri, Cortana and Google Home can actually understand.  They are the new ESL vector that we hadn't anticipated a decade ago.  It is their use of English that will become conventional, as English is already the de facto language of computing, and therefore our language is now the slave to code.

What this means for the band formerly known as Led Zeppelin is that it will no longer be discoverable.  In the future, if people say "Led Zeppelin" to Alexa, she'll respond with Lez Zeppelin (the rights-available version of the band).  Give humanity 100 years or so, and the idea of a band called Led Zeppelin will seem strange to folk.  Five generations removed, nobody will care who the original author was.  The "rights" holder will be irrelevant.  The only thing that will matter in 100 years is what the bot suggests.

Our language isn't ours.  It is the path to the convenient.  In bot speak, names are approximate and rights (ignoring the stalwart protectors) are meaningless.  Our concepts of trademarks, rights ownership, etc. are going to be steam-rolled by other factors, other "agents" acting at the user's behest.  The language and the needs of the spontaneous are immediate!

Sunday, November 13, 2016



At last year's Game Developers Conference I had the chance to experience new immersive video environments that are being created by game developers releasing titles for the new Oculus, HTC Vive and Google Daydream platforms.  One developer at the conference, Opaque Multimedia, demonstrated "Earthlight", which gave the participant an opportunity to crawl on the outside of the International Space Station as the earth rotated below.  In the simulation, a Microsoft Kinect sensor was following the position of my hands.  But what I saw in the visor was that my hands were enclosed in an astronaut's suit.  The visual experience was so compelling that when my hands missed the rungs of the ladder I felt a palpable sense of urgency because the environment was so realistically depicted.  (The space station was rendered as a scale model of the actual space station using the "Unreal" game engine.)  The experience was far beyond what I'd experienced a decade ago with crowd-sourced simulated environments like Second Life, where artists created 3D worlds in a server-hosted environment that other people could visit as avatars.

Since that time I've seen some fascinating demonstrations at Mozilla's Virtual Reality developer events.  I've had the chance to witness a 360-degree video of a skydive, used the WoofbertVR application to visit real art gallery collections displayed in a simulated art gallery, spectated a simulated launch and lunar landing of Apollo 11, and browsed 360 photography depicting dozens of fascinating destinations around the globe.  This is quite a compelling and satisfying way to experience visual splendor depicted spatially.  With the New York Times and IMAX now entering the industry, we can anticipate an incredible wealth of media content to take us to places in the world we might never have a chance to go.

Still, the experiences of these simulated spaces seem very ethereal.  Which brings me to another emerging field.  At the Mozilla Festival in London a few years ago, I had a chance to meet Yasuaki Kakehi of Keio University in Japan, who was demonstrating a haptic feedback device called Techtile.  The Techtile was akin to a microphone for physical feedback that could then be transmitted over the web to another, mirrored device.  When he put marbles in one cup, another person holding an empty cup could feel the rattle of the marbles as if the same marble impacts were happening on the sides of the empty cup held by the observer.  The sense was so realistic, it was hard to believe that it was entirely synthesized and transmitted over the Internet.  Subsequently, at the Consumer Electronics Show, I witnessed another of these haptic speakers.  But this one conveyed the sense not by mirroring precise physical impacts, but by giving precisely timed pulses, which the holder could feel as an implied sense of force direction without the device actually moving the user's hand at all.  It was a haptic illusion instead of a precise physical sensation.

As haptics work advances, it has the potential to impact common everyday experiences beyond the theoretical and experimental demonstrations I experienced.  Haptic devices are available this year in new Honda cars as part of Road Departure Mitigation, whereby the steering wheel can simulate rumble strips on the sides of a lane just by sensing the painted lines on the pavement with cameras.
I am also very excited to see this field expand to include music.  At Ryerson University's SMART lab, Dr. Maria Karam, Dr. Deborah Fels and Dr. Frank Russo applied the concepts of haptics and somatosensory depiction of music to people who didn't have the capability of appreciating music aurally.  Their first product, called the Emoti-chair, breaks the frequency range of music apart to depict different audio qualities spatially on the listener's back.  This is based on the concept that the human cochlea is essentially a large coiled surface upon which sounds of different frequencies resonate and are felt at different locations.  While I don't have perfect pitch, I think having a spatial perception of tonal scale would allow me to develop a cognitive sense of pitch correctness to compensate, using a listening aid like this.  Fortunately, Dr. Karam is advancing this work to introduce new form factors to the commercial market in coming years.
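To make the idea concrete, here is a minimal sketch of how a music signal might be split into frequency bands and mapped to a column of actuators running up a chair back.  This is my own illustration of the cochlea-inspired concept, not the SMART lab's actual design; the band edges and the eight-actuator layout are assumptions for the example.

```python
import numpy as np

def band_energies(samples, sample_rate, band_edges_hz):
    """Split one frame of audio into frequency bands and return
    the energy in each band (a rough cochlea-like decomposition)."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    energies = []
    for low, high in zip(band_edges_hz[:-1], band_edges_hz[1:]):
        mask = (freqs >= low) & (freqs < high)
        energies.append(spectrum[mask].sum())
    return energies

# Hypothetical layout: eight actuators arranged low-to-high up the
# chair's back, one per frequency band.
BAND_EDGES = [20, 60, 120, 250, 500, 1000, 2000, 4000, 8000]  # Hz

def actuator_levels(samples, sample_rate=44100):
    """Return a 0..1 drive level for each actuator position."""
    energies = band_energies(samples, sample_rate, BAND_EDGES)
    peak = max(energies) or 1.0
    return [e / peak for e in energies]
```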

Over many years I have had the chance to study various forms of folk percussion.  One of the most interesting drumming experiences I have had was a visit to Lombok, Indonesia, where I got to see a Gamelan performance in a small village accompanied by the large Gendang Belek drums.  The Gendang Belek is a large barrel drum worn with a strap that goes over the shoulders.  When the drum is struck, the reverberation is so fierce and powerful that it shakes the entire body by resonating through the spine.  I also had an opportunity to study Japanese Taiko while living in Japan.  The taiko resonates in the listener's chest.  But the experience of bone conduction through the spine is an altogether more intense way to experience rhythm.

Because I am such an avid fan of physical experiences of music, I frequently gravitate toward bassy music.  I tend to play it on a sub-woofer-heavy car stereo, or seek out experiences to hear this music in nightclub or festival performances where large speakers animate the lower frequencies.  I can imagine that if more people had the physical experience of drumming that I've had, instead of just the auditory experience of it, more people would enjoy making music themselves.


As more innovators like TADs Inc. (an offshoot of the Ryerson University project) bring physical experiences of music to the general consumer, I look forward to experiencing my music in greater depth.










Thursday, April 14, 2016


Back in 2005-2006 my friend Liesl told me about the coming age of chat bots.  I had a hard time imagining how people would embrace products that simulated human voice communication but were less "intelligent".  She ended up building a company that let people deploy polite automated service agents that you could program with a certain specific area of intelligence.  Upon launch she found that people spent a lot more time conversing with the bots than they did with the average human service agent.  I wondered if this was because it was harder to get questions answered, or if people just enjoyed the experience of conversing with the bots more than they enjoyed talking to people.  Perhaps when we know the customer service agent is paid hourly, we don't gab in excess.  But if it's a chat bot we're talking to, we don't feel the need to be hasty.

Fast forward more than a decade, and IBM has acquired her company into the Watson group.  During a dinner party we talked about Amazon's Echo sitting on her porch.  She and her husband would occasionally make DJ requests to "Alexa" (the name for the Echo's internal chat bot) as if it were a person attending the party.  It definitely seemed that the age of more intelligent bots was upon us.  Most folk who have experimented with the speech-input products of the last decade have become accustomed to talking to bots in a robotic monotone devoid of accent, because of the somewhat random speech-capture mistakes that early technology was burdened with.  If the bots don't adapt to us, we go to them, it seems, mimicking the robotic voices we heard depicted in the science fiction films of the '50s and '60s.

This month both Microsoft and Facebook have announced open bot APIs for their respective platforms.  Microsoft's platform for integration is an open source "Bot Framework" that allows any web developer to re-purpose the code to inject new actions or content tools into the active discussion flow of their conversational chat bot, Cortana, which is built into the search box of every Windows 10 operating system they license.  They also demonstrated how the new bot framework allows their Skype messenger to respond to queries intelligently if the right libraries are loaded.  Amazon refers to the app-sockets for the Echo platform as "skills", whereby you load a specific field of intelligence into the speech engine to allow Alexa to query the external sources you wish.  I noticed that both the Alexa and Cortana teams seem to be focusing on pizza ordering in their product demos.  But one day we'll be able to query beyond the basic necessities.  In an early demonstration back in 2005 of the technology Liesl and Dr. Zakos (her cofounder) built, they had their chat bot ingest all my blog writings about folk percussion, then answer questions about certain topics that were in my personal blog.  If a bot narrows a question to a subject matter, its answers can be uncannily accurate to the field!
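To picture what these open bot APIs let a developer do, here is a toy sketch of the pattern: the platform turns an utterance into an intent plus slots, and a developer registers handlers for the intents their skill cares about.  This is a hypothetical illustration of the plug-in concept, not Microsoft's Bot Framework or Amazon's actual Skills API, and the intent and slot names are made up.

```python
# Hypothetical skill registry: the platform parses speech into an intent
# name and slot values; developers plug in handlers for their own intents.

HANDLERS = {}

def skill(intent_name):
    """Decorator that registers a handler for a named intent."""
    def register(func):
        HANDLERS[intent_name] = func
        return func
    return register

@skill("OrderPizza")
def order_pizza(slots):
    size = slots.get("size", "medium")
    topping = slots.get("topping", "cheese")
    return f"Okay, ordering a {size} {topping} pizza."

@skill("TrackOrder")
def track_order(slots):
    return "Your order is out for delivery."

def handle(intent_name, slots):
    """Route a parsed utterance to the registered handler, if any."""
    handler = HANDLERS.get(intent_name)
    if handler is None:
        return "Sorry, I don't know how to help with that yet."
    return handler(slots)

print(handle("OrderPizza", {"size": "large", "topping": "pepperoni"}))
```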

Facebook's plan is to inject bot intelligence into the main Facebook Messenger app.  Their announcements actually follow quite closely the concept Microsoft announced, of developers being able to port new capabilities into each platform vendor's chatting engine.  It may be that both Microsoft and Facebook are planning for the social capabilities of their joint collaboration on the launch of Oculus, Facebook's headset-based virtual reality platform, which runs on Windows 10 machines.

The outliers in this era of chat bot openness are Apple's Siri and Google's OK Google speech tools, which are like centrally managed brains.  (Siri may query the web using specific sources like Wolfram Alpha, but most of the answers you get from either will be consistent with the answers others receive for similar questions.)  The thing that I think is very elegant about the approaches Amazon, Microsoft and Facebook are taking is that they make the knowledge engine of the core platform extensible in ways that a single company could not.  The approach also allows customers to personalize their experience of the platform by adding specific new ported services to the tools.  My interest here is that the speech platforms will become much more like the Internet of today, where we are used to having very diverse "content" experiences based on our personal preferences and proclivities.

It is very exciting to see that speech is becoming a very real and useful interface for interacting with computers.  While the content of the web is already one of the knowledge ports of these speech tools, the open-APIs of Cortana, Alexa and Facebook Messenger will usher in a very exciting new means to create compelling internet experiences.  My hope is that there is a bit of standardization so that a merchant like Domino's doesn't have to keep rebuilding their chat bot tools for each platform.

I remember my first experience having a typed conversation with the Dr. Know computer at the Oregon Museum of Science and Industry when I was a teenager.  It was a simulated Turing-test program designed to give a reasonably acceptable experience of interacting with a computer in a human way.  While Dr. Know was able to artfully dodge or re-frame questions when it detected input that wasn't in its knowledge database, I can see that the next generations of teenagers will be able to have the same kind of experience I had in the 1980's.  But their discussions will go in the direction of exploring knowledge and the logic structures of a mysterious mind, instead of ending up in the rhetorical cul-de-sacs of the Dr. Know program.

While we may not chat with machines with quite the same intimacy as Spike Jonze's character in "Her", the days when we talk in robotic tones to operate the last decade's speech-input systems are soon to end.  Each of these innovative companies is dealing with the hard question of how to get us out of our stereotypes of robot behavior and get us back to acting like people again, returning to the main interface that humans have used for eons to interact with each other.  Ideally the technology will fade into the background and we'll start acting normally again, instead of staring at screens and tapping with our fingers.




P.S.  Of course Mozilla has several initiatives on speech in progress.  We'll talk about those very soon.  But this post is just about how the other innovators in the industry are doing an admirable job making our machines more human-friendly.

Thursday, September 10, 2015

Bluetooth LE beacons and the coming hyper-local web of the physical world



Philz Coffee mobile single-serving brewing truck at San Francisco Marina
Recently, my wife and I were riding bikes around the Fort Mason area on the San Francisco peninsula.  Lo and behold, my wife sees someone with a Philz coffee cup walk by.  She says to herself, "Wait a tick! There's no Philz in this neighborhood!"  San Franciscans are tribal about their preferred coffees.  We typically know all the physical locations of our favorite roasters and brewers.  My wife knows I'm a Philz devotee.  So seeing a Philz cup outside of its natural habitat caught her attention.  Minutes later, we ran into the new Philz truck, parked on Marina Blvd.  Booyah!

Phil Jaber in the Original Philz Coffee Shop

This is the first time I had thought about the half-life of a coffee cup in the wild.  The various coffee roasting factions demarcate their turf using the coffee cups they give visitors as a sort of viral advertising strategy.  And the radius of inspiration lasts as long as it takes for a person to consume their beverage, which may be five minutes if a person is walking and drinking at a moderate pace.  This is plenty of time for one customer to inspire Pavlovian thirst reactions in a dozen passersby.

This brings me to the emerging tech trend of the season: the use of Bluetooth beacons for transmitting location signals and web content.  (See the Apple iBeacon and Google Eddystone initiatives for the nitty gritty.)  We can assume that the first applications of these tools will be marketing related, like the coffee cups, sending signals that span from a few feet to fifty feet depending on the transmit strength of the signal.  But one can imagine a scenario where beacons of hundreds of varieties might talk to our wearable devices or phones, without intruding on our attention, in order to sift out topics, events and messages of specific interest to us personally.  As a first step, something has to be written to be read.

Tweetie Nearby View
There have been some interesting initiatives around hyper-local web content discovery in Augmented Reality-style applications.  My favorites include Yelp Monocle, which spatially rendered restaurant reviews over the viewfinder of a phone's camera; Loren Brichter's Tweetie app, which allowed users to point their phone in any direction to see what was being tweeted nearby; and the Shopkick app, which listens for the high-pitched audio signals, beyond human auditory range, that Shopkick transmitters send in stores.  All of these are app-specific signals.  It becomes very interesting when these kinds of strategies are done in an open fashion that doesn't require a special app to consume them.  The web itself is the best means to move this kind of use case forward.  That is exactly what is happening with this new push to leverage Bluetooth.  And of course Bluetooth signals decay rapidly over short distances.  So they are only relevant to people nearby, for whom content can be tailored.

Why is the idea of the decaying signal good?  Think about the movie Chef, and the plot device of the protagonist tweeting his location and updates while driving across the country.  It doesn't make a whole lot of sense to use a global platform for a location-specific service, does it?  Great marketing film for Twitter, but a ridiculous premise.  Chefs need to talk to their communities, not the world, when publicizing today's menu.  And a web where everyone has to manually follow sources and manage inbound information meticulously is a web that will inundate our attention.  When it comes to the things that can matter to us in the tangible world, we need the web to speak to us when it's relevant and shut up at other times.  Otherwise, the signal/utility of the web gets lost in the noise.

Google's innovation with the "Eddystone URL" introduces the concept of the beacon being a doorway to the web: the URL a beacon transmits can utilize any modern browser to connect the user to a broad array of web content associated with the specific location, without needing a custom application to read it.  Every smartphone in existence can render and interact with web content published over HTTP.
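For the curious, here is a rough sketch of how an Eddystone-URL frame squeezes a URL into the handful of bytes available in a Bluetooth LE advertisement: a frame-type byte, a transmit-power byte, a one-byte scheme prefix, and a compressed URL that swaps common endings like ".com/" for single bytes.  The byte values below reflect my reading of the published spec, so treat them as illustrative rather than authoritative.

```python
# Sketch of Eddystone-URL frame encoding (values per my reading of the spec).

SCHEMES = {"http://www.": 0x00, "https://www.": 0x01,
           "http://": 0x02, "https://": 0x03}
EXPANSIONS = {".com/": 0x00, ".org/": 0x01, ".edu/": 0x02, ".net/": 0x03,
              ".info/": 0x04, ".biz/": 0x05, ".gov/": 0x06,
              ".com": 0x07, ".org": 0x08, ".edu": 0x09, ".net": 0x0A,
              ".info": 0x0B, ".biz": 0x0C, ".gov": 0x0D}

def encode_eddystone_url(url, tx_power=-20):
    """Pack a URL into an Eddystone-URL service-data frame."""
    for prefix in sorted(SCHEMES, key=len, reverse=True):
        if url.startswith(prefix):
            scheme_byte, rest = SCHEMES[prefix], url[len(prefix):]
            break
    else:
        raise ValueError("URL must start with a known scheme prefix")
    body = bytearray()
    while rest:
        for text, code in EXPANSIONS.items():   # longer endings listed first
            if rest.startswith(text):
                body.append(code)
                rest = rest[len(text):]
                break
        else:
            body.append(ord(rest[0]))
            rest = rest[1:]
    frame = bytes([0x10, tx_power & 0xFF, scheme_byte]) + bytes(body)
    if len(frame) > 20:
        raise ValueError("URL too long to fit in an Eddystone-URL frame")
    return frame

print(encode_eddystone_url("https://www.mozilla.org/").hex())
```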

Admin view of Estimote beacon 
Beacon developer Estimote is joining the Eddystone initiative and will soon support the new URL broadcasting as part of their existing line of Bluetooth beacons.  Their current SDKs allow custom app developers to map locations and tailor apps specific to those locations.  Once Eddystone URLs are integrated, they will be readable by notification management tools like Google Now, and probably soon by custom scanners, mobile web browsers and lock-screen apps.

Once Google exposes support for beacon recognition in Android, the adoption of Bluetooth contextual beacons could become fairly mainstream in large metropolitan areas.  (It will be even better if it's done in the Android Open Source Project, so that forked Android initiatives like Xiaomi's and the Kindle Fire can benefit from the innovations and efforts of "beacon publishers".)  What this could do for our use of Internet tools in daily life is simplify a great number of everyday tasks.  We will no longer need a dedicated app just to check bus schedules, get restaurant reviews, make reservations, etc.  Those scenarios will be able to happen on demand, as needed, with very little hassle for us as users.

In the coming years the companies that provide our phones, browsers and other communications tools will be innovating ways to surface and manage these content signals as they proliferate.  So it is unlikely to be something many of us will need to manage actively.  But very soon the earliest iterations of augmented reality apps will start to surface in our mobile devices in compelling new ways that will allow the physical environment around us to animate and inform us when we want it to.  And it will be easy to ignore at all other times.

One step beyond the mere receiving and sorting of signals is the concept that we might one day transmit our own signals to beacon receivers in our proximity.  Imagine the concept of Vendor Relationship Management, popularized by Doc Searls: a means of transmitting our preferences to the outside world and having information and services tailor themselves to us.  In a world where we express our wants, needs and opinions digitally, the digital-physical world might in turn tailor messages to us without the need for physical action.

The first step for this wave of innovation to be truly useful to us will be to have the digital world's wealth of subliminal content available to us as needed, nearby.  The second step will be discovery and revealing in a manageable way.  (This is already in process.)  The third step will be the assertion of preference through the tools the OS, apps and browsers provide.  I think this is the area that will benefit substantially from developer innovation.

Monday, May 11, 2015

Mesh networking for app delivery in Apple OS X and Windows GWX


The upcoming release of the Windows 10 operating system is exciting for a number of bold new technologies Microsoft plans to introduce, including the new Microsoft Edge browser and Cortana speech-recognition tools.  The upgrade push is called GWX, for "Get Windows 10", and will reach all Windows users from version 7 to 8.1.  Particularly interesting to me is that it will be the first time the Windows operating system pushes out software over mesh networks in a peer-to-peer (aka "P2P") model.

Over a decade ago software tools for creating peer-to-peer and mesh networks proliferated as alternative approaches to bandwidth-intensive content delivery and task processing.  Allowing networked devices to mesh and delegate tasks remotely between each other avoids the burden of one-to-one connections between a computer and a central hosting server.  Through this process the originating host server can delegate tasks to other machines connected in the mesh and then turn its attention to other tasks while the function (be it a piece of content to be delivered/streamed or a calculation to be executed) cascades through the meshed devices where there is spare processing capacity.

Offloading one-to-one tasks to mesh networks can unburden the infrastructure that provides connectivity to all end users.  So this is a general boon to the broader Internet infrastructure in terms of bandwidth availability.  While the byte volume that reaches each end user is the same, the number of copies the origin sends is fewer.  (To picture this, compare a Netflix stream, which goes from a single server to a single computer, with a torrent stream, which is served across a mesh of dozens of computers in the user's proximity.)
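A minimal sketch of the idea might look like the following.  This is a conceptual illustration of mesh-assisted delivery, not Apple's or Microsoft's actual protocol: the client asks nearby peers for each chunk of an update first, verifies it against a hash from a signed manifest, and only falls back to the origin server for chunks no peer can supply.

```python
# Conceptual sketch: prefer peers in the local mesh, verify every chunk
# against a trusted manifest, fall back to the origin server if needed.

import hashlib

def fetch_update(chunk_hashes, peers, origin):
    """chunk_hashes: expected SHA-256 hex digest of each chunk, taken from
    a signed manifest.  peers/origin: objects exposing get_chunk(index)
    that return bytes or None."""
    chunks = []
    for index, expected in enumerate(chunk_hashes):
        data = None
        for peer in peers:                      # try the local mesh first
            candidate = peer.get_chunk(index)
            if candidate and hashlib.sha256(candidate).hexdigest() == expected:
                data = candidate
                break
        if data is None:                        # fall back to the host server
            data = origin.get_chunk(index)
            assert hashlib.sha256(data).hexdigest() == expected
        chunks.append(data)
    return b"".join(chunks)
```

Because every chunk is checked against the signed manifest no matter where it came from, an untrusted peer can waste a little bandwidth but cannot tamper with the update, which is part of why the rights-management worry largely goes away.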

Here is a short list of initiatives that have utilized mesh networking in the past:
SETI@home (deciphering radio signals from space for pattern interpretation across thousands of dormant PCs and Macs)
Electric Sheep (collaborative sharing of fractal graphic animations with crowd-sourced feedback)
Skype (social networking and telephony, prior to the Microsoft acquisition)
Veoh (video streaming)
BitTorrent (file sharing)
Napster (music sharing)
One Laptop per Child (Wi-Fi connectivity in off-grid communities)
Firechat (phones create a mesh over Bluetooth frequencies)

Meshing is emerging in software delivery primarily because of the benefit it offers in reducing the burden on Apple and Microsoft of download fulfillment.

Apple's first introduction of this capability came in the Yosemite operating system update.  Previously, software downloads were managed by laptop/desktop computers and pushed over USB to peripherals like iPods, iPhones and iPads.  When these devices shifted away from the hub-and-spoke model to deliver updates directly over the air, two or more devices behind a single Wi-Fi access point would make two or more separate requests to the iTunes marketplace.  With Apple's new networked permissions flow, one download can be shared between all household computers and peripherals.  It makes ecological sense to unburden the web from multiple copies of the same software going to the same person or household.  It benefits Apple directly to send fewer copies of software, and it serves the user no less.


Microsoft is going a step further with the upcoming Windows 10 release.  Their version of app distribution over a mesh allows you to fetch copies of Windows updates not just from sources that may be familiar to you on your own Wi-Fi network.  Your computer may also decide to pull an update from some other, unknown source on the broader Internet that is in your proximity.

What I find very interesting about this is that Microsoft had previously been very restrictive about software distribution processes.  Paid software products are their core business model, after all.  So to introduce a process that meshes Windows machines in a peering network for software delivery demonstrates that the issues around software piracy and rights management have largely been resolved.

For more detail about the coming Windows 10 rollout, ZDNet has a very good update. 


Thursday, April 30, 2015

Calling Android users: Help Mozilla Map the World!

Many iPhone users may have wondered why Apple prompts them with a message saying "Location accuracy is improved when Wi-Fi is turned on" each time they choose to turn Wi-Fi off.  Why does a phone that has GPS (Global Positioning System) capability need to use Wi-Fi to determine its location?

The reason is fairly simple.  There are of course thousands of radio signals traveling through the walls of buildings all around us.  What makes a Wi-Fi signal (or even Bluetooth) particularly useful for location mapping is that it travels a relatively short distance before it decays, because Wi-Fi transmits at relatively low power.  A combination of three or more Wi-Fi signals can be used in a very small area by a phone to triangulate locations on a map, in the same manner that earthquake shockwave strengths can be used to triangulate epicenters.  Wi-Fi hubs don't need to transmit their locations to be useful.  Most are oblivious to their location.  It is the phone's interpretation of their signal strengths and inferred locations that creates the value for the phone's internal mapping capabilities.  No data that travels over Wi-Fi is relevant to using radio for triangulation.  It is merely the signal strength or weakness that makes it useful.  (Most Wi-Fi hubs are password protected, and the data sent over them is encrypted.)
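For a concrete picture of the math involved, here is a toy example: each access point's received signal strength is converted into a rough distance with a log-distance path-loss model, and the phone's position is then solved with least squares.  The constants and access-point coordinates are assumptions for illustration, not values any real geolocation service uses.

```python
import numpy as np

def rssi_to_distance(rssi_dbm, tx_power_dbm=-40, path_loss_exponent=2.5):
    """Estimate distance in meters from received signal strength (RSSI)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

def trilaterate(ap_positions, distances):
    """Least-squares position from access-point coordinates and distances.
    Subtracting the first circle equation from the others gives a linear
    system in (x, y)."""
    (x1, y1), d1 = ap_positions[0], distances[0]
    rows, rhs = [], []
    for (xi, yi), di in zip(ap_positions[1:], distances[1:]):
        rows.append([2 * (x1 - xi), 2 * (y1 - yi)])
        rhs.append(di**2 - d1**2 - xi**2 + x1**2 - yi**2 + y1**2)
    solution, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return solution  # estimated (x, y) in meters

aps = [(0.0, 0.0), (30.0, 0.0), (0.0, 30.0)]   # three visible hotspots
rssi = [-55, -63, -70]                         # observed signal strengths
dists = [rssi_to_distance(r) for r in rssi]
print(trilaterate(aps, dists))
```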

Being able to let phone users determine their own location is of keen interest to developers who can't make location-based services work without fairly precise location determinations.  The developers don't want to track the users per se.  They want the users to be able to self-determine their location when they request a service at a precise point in space.  (Say, requesting a Lyft ride or checking in at a local eatery.)  There is a broad range of businesses that try to help phones accurately orient themselves on maps.  The data that each application developer uses may be different across a range of phones.  Android, Windows and iPhones all have different data sources for this, which can make it frustrating for many users to get a consistent app experience, even when they're all using the same basic application.

At Mozilla, we think the best way to solve this problem is to create an open source solution.  We are app developers ourselves, and we want our users to have a consistent quality of experience, along with all the websites that our users access using our browsers and phones.  If we make location data accessible to developers, we should be able to help Internet users navigate their world more consistently.  By doing it in an open source way, dozens of phone vendors and app developers can utilize this open data source without the cumbersome and expensive contracts that are sometimes imposed by location service vendors.  And at Mozilla we do this in a way that empowers users to make a personal choice as to whether they wish to participate in data contribution or not.
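To show what "accessible to developers" looks like in practice, here is a small sketch of querying a geolocation service with the Wi-Fi access points a device can currently see.  The endpoint path, request fields and the "test" API key are based on my reading of the Mozilla Location Service documentation, so treat them as assumptions and check the current docs before relying on them.

```python
import json
import urllib.request

def geolocate(access_points):
    """Ask the location service for a position estimate given nearby Wi-Fi
    access points.  Endpoint and field names assumed from the MLS docs."""
    body = json.dumps({"wifiAccessPoints": access_points}).encode("utf-8")
    request = urllib.request.Request(
        "https://location.services.mozilla.com/v1/geolocate?key=test",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

# Two hotspots the device can currently hear, with their signal strengths.
observed = [
    {"macAddress": "01:23:45:67:89:ab", "signalStrength": -65},
    {"macAddress": "01:23:45:67:89:cd", "signalStrength": -71},
]
print(geolocate(observed))  # expected shape: {"location": {...}, "accuracy": ...}
```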

How can I help?  There are two ways Firefox users can get involved.  (And several ways that developers can help.)  We have two applications for Android that have the capability to "stumble" Wi-Fi locations.

The first app is called "Mozilla Stumbler" and is available for free download in the Google Play store. (https://play.google.com/store/apps/details?id=org.mozilla.mozstumbler)  By opening MozStumbler and letting it collect the radio frequencies around you, you help the location database register those frequencies so that future users can determine their location.  None of the data your Android phone contributes can be specifically tied to you.  It's collecting the ambient radio signals just for the purpose of improving map accuracy.  To make it fun to use MozStumbler, we have also created a leaderboard for users to keep track of their contributions to the database.


The second app is our Firefox mobile browser, which runs on the Android operating system.  (If it becomes possible to stumble on other operating systems, I'll post an update to this blog.)  You need to take a couple of steps to enable background stumbling in your Firefox browser.  Specifically, you have to opt in to sharing location data with Mozilla.  To do this, first download Firefox on your Android device.  On the first run you should get a prompt asking what data you want to share with Mozilla.  If you bypassed that step, or installed Firefox a long time ago, here's how to find the setting:



1) Click on the three dots at the right side of the Firefox browser chrome then select "Settings" (Above image)

2) Select Mozilla (Right image)

3) Check the box that says “Help Mozilla map the world! Share approximate Wi-Fi and cellular location of your device to improve our geolocation services.” (Below image)

If you ever want to change your settings, you can return to the settings of Firefox, or you can view your Android device's main settings menu at Settings > Personal > Location, which is the same place where you can see all the applications you've previously granted access to look up your physical location.

The benefit of the contributed data is manifold:
1) Firefox users on PCs (which do not have GPS sensors) will be able to determine their positions based on the signals of the Wi-Fi hotspots around them, rather than having to continually type in specific location requests.
2) Apps on Firefox Operating System, and websites that load in Firefox, that use location services will perform more accurately and rapidly over time.
3) Other developers who want to build mobile applications and browsers will have affordable access to location service tools.  So your contribution will foster the open source developer community.

And in addition to the benefits above, my colleague Robert Kaiser points out that even devices with GPS chips can benefit from getting Wi-Fi validation in the following way:
"1) When trying to get a location via GPS, it takes some time until the chip actually has seen signals from enough satellites to determine a location ("get a fix"). Scanning the visible wi-fi signals is faster than that, so getting an initial location is faster that way (and who wants to wait even half a minute until the phone can even start the search for the nearest restaurant or cafe?).
2) The location from this wifi triangulation can be fed into the GPS system, which enables it to know which satellites it roughly should expect to see and therefore get a fix on those sooner (Firefox OS at least is doing that).
3) In cities or buildings, signals from GPS satellites get reflected or absorbed by walls, often making the GPS position inaccurate or not being able to get a fix at all - while you might still see enough wi-fi signals to determine a position."

Thank you for helping improve Mozilla Location Services.

If you'd like to read more about Mozilla Location Services please visit:
https://location.services.mozilla.com/
To see how well our map currently covers your region, visit:
https://location.services.mozilla.com/map#2/15.0/10.0
If you are a developer, you can also integrate our open source code directly into your own app to enable your users to stumble for fun as well.  Code is available here: https://github.com/mozilla/DemoStumbler
For an in-depth write-up on the launch of the Mozilla Location Service please read Hanno's blog here: http://blog.hannosch.eu/2013/12/mozilla-location-service-what-why-and.html
For a discussion of the issues on privacy management view Gervase's blog:
http://blog.gerv.net/2013/10/location-services-and-privacy/








Wednesday, December 24, 2014

Launching crowd-funded volunteer developer projects with Mozilla


I joined Mozilla as a staff contributor three years ago working on API partnerships, early Firefox Operating System content partnerships for our phones and identity management solutions for the web.  I found Mozilla to be one of the most compelling work environments of my career.  Beyond the prestige of working on a product that reaches 1 in 5 Internet users globally, I found the passion and inspiration of our community and coworkers infectious. 

A huge number of people who work on Firefox products and tools are volunteers.  It’s amazing to be surrounded by people who work 100% based on the passion they have for their contribution to the web and its benefits to the global community.  At the FISL 2013 conference in Brazil, I had my first experience working with the Mozilla Representatives and community volunteers.

I bumped into an engineer, André Natal, who wanted to create a speech-to-text engine for the Firefox operating system.  (This enables tools like voice-triggered web search and map navigation.)  With a few connections and recommendations, he was on his way, ultimately releasing the Firefox OS Marketplace's first speech-recognition app for the Brazilian market and going on to incorporate the capability into the core Firefox operating system.  Another engineer, Fábio Magnoni, helped us bug-fix the emergency dialer for the phones and connected our phone launch with dozens of publishers in Brazil.  Both of these engineers had their own day jobs, but contributing to Mozilla and Firefox products was their passion.

When I asked them why they worked on Firefox products, they said it was simply their own excitement for the challenge, and the experience of working with others toward the common goal of an open developer environment, that motivated them.
Webmaker event 6619
Webmaker Training Camp, Belize

This year I coordinated my first volunteer developer event: a conference for young developers in the northern region of Corozal in Belize.  This is the story of its inception.

Last spring my wife and I took a trip to Belize on a quest to discover as many Mayan ruins as we could.  A chance dinner at the Cerros Beach Resort became a new side project for me when the founders of the resort, Jenny and Bill Bellerjeau, heard about the Mozilla mission and invited us to host an event at their resort.  Bill wanted the kids of his village to have a chance to learn about the Internet.  A quick post on social media resulted in a groundswell of interest among my coworkers and friends.

Mozilla has a community of engaged evangelists called Mozilla Representatives who host "teach the web" events in their countries.  Over 2000 are conducted annually around the world.  (Ours is going to be Webmaker Event 6619)

Upon returning to the US, I found that Mozilla had no local representatives in Belize.  So we decided to hold a Webmaker event at Cerros Beach Resort, flying in four experienced Mozillians to lead a "teach the web" session.  The four Mozillians, from Costa Rica, Mexico, Canada and the UK, volunteered to teach 30 kids over their winter holidays.  Three of our US staff volunteered time to prepare the fundraising campaign and ready the donated phones and computers for the project.

We used the crowd-funding platform Indiegogo to raise donations for the project and to keep in touch with our donors about the progress of our campaign.  (Indiegogo gave us a dedicated partner page at https://www.indiegogo.com/partners/mozilla)  We received 25 donations from our friends and connections, 11 donated computers, 30 phones acquired through Mozilla's partner ZTE, 15 SIM cards donated by Belize Telemedia, and the real estate to host the event from Cerros Beach Resort.

This week the trainers are headed to Belize.  They will teach students in the region how to make websites, how to create phone apps, and the more esoteric topics that initially got each of the trainers involved with web coding.  (Python, site localization, databases and web query syntax)

It's incredible to see a vision go from a simple brainstorm over dinner to a full week-long training event in a foreign country.  Working with Mozillians like Andrea Wood, Mike Poessy, Kory Salsbury, Shane Caraveo, Matthew Ruttley and Julio Gómez Sánchez is part of what makes me proud to be a Mozillian.

Profound thanks to Bill and Jenny Bellerjeau for their inspiration and generous offer to host this event, and to all the participants, donors and volunteers for making this happen!

We hope that others find the Indiegogo platform useful in funding their own developer projects in years to come.




To hear the intros from each of the project contributors visit: https://www.indiegogo.com/projects/corozal-web-training-camp#activity  We will have output from the training on the Tumblr page for the event: http://corozalwebtrainingcamp.tumblr.com/