Wednesday, December 24, 2014

Launching crowd-funded volunteer developer projects with Mozilla


I joined Mozilla as a staff contributor three years ago, working on API partnerships, early Firefox OS content partnerships for our phones, and identity management solutions for the web.  I found Mozilla to be one of the most compelling work environments of my career.  Beyond the prestige of working on a product that reaches 1 in 5 Internet users globally, I found the passion and inspiration of our community and coworkers infectious. 

A huge number of people who work on Firefox products and tools are volunteers.  It’s amazing to be surrounded by people who work 100% based on the passion they have for their contribution to the web and its benefits to the global community.  At the FISL 2013 conference in Brazil, I had my first experience working with the Mozilla Representatives and community volunteers.

I bumped into an engineer, André Natal, who wanted to create a speech-to-text engine for Firefox OS.  (This enables tools like voice-triggered web search and map navigation.)  With a few connections and recommendations, he was on his way, ultimately releasing the Firefox OS Marketplace's first speech recognition app for the Brazilian market and going on to incorporate the capability into the core of Firefox OS.  Another engineer, Fábio Magnoni, helped us bug-fix the emergency dialer for our phones and connected our phone launch with dozens of publishers in Brazil.  Both of these engineers had their own day jobs; contributing to Mozilla and Firefox products was their passion.

When I asked them why they worked on Firefox products, the answer was simply their excitement for the challenge and the experience of working with others toward a common goal: fostering an open developer environment.
Webmaker Event 6619: Webmaker Training Camp, Belize

This year I coordinated my first volunteer developer event: a conference for young developers in Corozal, in northern Belize.  This is the story of its inception.

Last spring my wife and I took a trip to Belize on a quest to discover as many Mayan ruins as we could.  A chance dinner at the Cerros Beach Resort became a new side project for me when the resort's founders, Jenny and Bill Bellerjeau, heard about the Mozilla mission and invited us to host an event there.  They wanted the kids of their village to have a chance to learn about the Internet.  A quick post on social media resulted in a groundswell of interest among my coworkers and friends. 

Mozilla has a community of engaged evangelists called Mozilla Representatives who host "teach the web" events in their countries.  Over 2,000 are conducted annually around the world.  (Ours is going to be Webmaker Event 6619.)

Upon returning to the US, I found that Mozilla had no local representatives in Belize.  So we decided to hold a Webmaker event at Cerros Beach Resort, flying in four experienced Mozillians from Costa Rica, Mexico, Canada and the UK to lead a "teach the web" session for 30 kids over their winter holidays.  Three of our US staff volunteered time to prepare the fundraising campaign and to ready donated phones and computers for the project.

We used the crowd-funding platform Indiegogo to raise donations for the project and to keep in touch with our donors about the progress of our campaign.  (Indiegogo gave us a dedicated partner page at https://www.indiegogo.com/partners/mozilla)  We received 25 donations from our friends and connections, 11 donated computers, 30 phones acquired through Mozilla's partner ZTE, 15 SIM cards donated by Belize Telemedia, and the real estate to host the event from Cerros Beach Resort.

This week the trainers are headed to Belize.  They will teach students in the region how to make websites, how to create phone apps, and the more specialized topics that first drew each trainer into web coding (Python, site localization, databases and web query syntax).

It’s incredible to see a vision go from a simple brainstorm over dinner to a full week-long training event in a foreign country.  Working with Mozillians like Andrea Wood, Mike Poessy, Kory Salsbury, Shane Caraveo, Matthew Ruttley and Julio Gómez Sánchez is part of what makes me proud to be a Mozillian.

Profound thanks to Bill and Jenny Bellerjeau for their inspiration and generous offer to host this event, and to all the participants, donors and volunteers for making this happen!

We hope that others find the Indiegogo platform useful in funding their own developer projects in years to come.




To hear the intros from each of the project contributors visit: https://www.indiegogo.com/projects/corozal-web-training-camp#activity  We will have output from the training on the Tumblr page for the event: http://corozalwebtrainingcamp.tumblr.com/

Tuesday, September 23, 2014

The transition of web search - From open to fragmented and proprietary



The beginning: open-web indexes
Web indexing (also called crawling or spidering) was a means of creating a private index of published HTML documents on the web in the 1990s.  Popular search engines would use keyword position in the document, frequency of keyword mentions, font style and meta-tag data to determine a public document’s relevance to a given keyword query.  This manner of indexing and searching was the first wave of search-engine mechanics.  Crawlers could start at the top node of any public domain and follow every link from there to make a replica of the content being posted daily.  (DNS, the domain name lookup service managed by internet service providers, was also a public resource for translating internet protocol addresses into legible domain names, broadcasting new site URLs daily.)  The open web allowed multiple companies to launch differentiated services indexing the same open web.  AOL, Yahoo!, Altavista, HotBot, Lycos and Excite were all able to promote distinct services through their own portals and search interfaces, and competition thrived.  Internet users, publishers and software companies all benefited from this openness.
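The crawling mechanics described above can be sketched in a few lines of Python.  This is a toy illustration, not any real engine's code; the `fetch` and `extract_links` helpers and the URLs are hypothetical stand-ins for network fetching and HTML parsing:

```python
# A toy breadth-first crawler in the spirit of 1990s spiders: start at a
# domain's root and follow links outward to build a local index.
from collections import deque

def crawl(start_url, fetch, extract_links, max_pages=100):
    """fetch(url) -> page text; extract_links(text) -> list of URLs."""
    index = {}                      # url -> document text
    queue = deque([start_url])
    seen = {start_url}
    while queue and len(index) < max_pages:
        url = queue.popleft()
        text = fetch(url)
        index[url] = text
        for link in extract_links(text):
            if link not in seen:    # avoid re-crawling pages already queued
                seen.add(link)
                queue.append(link)
    return index

# A tiny in-memory "web" standing in for real network fetches:
site = {
    "http://site.example/": "welcome home http://site.example/a",
    "http://site.example/a": "page a",
}
index = crawl(
    "http://site.example/",
    fetch=site.get,
    extract_links=lambda text: [w for w in text.split() if w.startswith("http")],
)
# index now holds a replica of both pages of the toy site
```

A real spider would add politeness delays, duplicate-content detection and HTML parsing, but the queue-and-visited-set loop is the core of the technique.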


Early spam-evasion
With the emergence of search engine optimization (SEO), many of these engines had problems with irrelevant content surfacing for popular keywords, because the ranking signal was based on what a single publisher posted on its own domains about itself.  This approach was obviously vulnerable to abuse.  Some publishers would “keyword-stuff” their meta-tags or put footers of keywords in invisible fonts, strategies meant to dupe users into visiting irrelevant pages to inflate traffic volumes and ad dollars through extra impression volume.  Each portal used manual intervention to address SEO spam.  Yahoo! had its own editorial directory that served the top layer of results, which prevented SEO spam from surfacing at the top of a results page.  Microsoft used LookSmart.  AOL and Google used the Netscape Open Directory Project (aka DMOZ) as human validation of a site's relevance to a specific subject category.  

While Yahoo!’s directory was proprietary and LookSmart’s was licensed in a syndication model, DMOZ was provided as a free tool to any site that needed a search directory.  It was curated by a group of volunteers who took prestige from being category editors of different subject-matter branches of the directory tree.  This model of human attention as a "relevancy multiplier" for the directories was leveraged by Google to pull ahead in the anti-spam competition.  Larry Page included in his PageRank algorithm a mechanism to determine how many web publishers inserted links to specific domains, and indexed those domains as subject-matter authorities for the topics of the referring links.  Links that publishers embedded to other pages were seen as an indication of curatorial attention by the linking site that merited displaying the referenced page higher in the results.  The unique advantage of PageRank was that it mostly ignored what publishers said about themselves and focused on what others said about them, subverting the SEO spam practices of the era.  (The human intervention theme recurs later.)
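As a rough sketch of the idea (not Google's actual algorithm or data), the core PageRank computation can be written as a short power-iteration loop in Python; the link graph below is hypothetical:

```python
# Minimal PageRank power-iteration sketch (illustrative; not Google's code).
# A page's score comes from the pages linking to it, not from its own content.
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with uniform rank
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                continue  # dangling page: its rank mass is simply dropped here
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:             # each link passes on a share
                new_rank[target] += share
        rank = new_rank
    return rank

# Hypothetical link graph: three sites all link to "authority.example"
graph = {
    "a.example": ["authority.example"],
    "b.example": ["authority.example", "a.example"],
    "c.example": ["authority.example"],
    "authority.example": [],
}
ranks = pagerank(graph)
# "authority.example", with the most inbound links, scores highest
```

Keyword-stuffing your own page does nothing here; only earning links from other pages raises your score, which is exactly what subverted the SEO spam of the era.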

PageRank's peer-review system, in contrast to the editor-review system of the directories, enabled Google to pull ahead in relevance as the web grew in scale, thereby unseating Inktomi as the algorithmic engine of the largest portal at the time, Yahoo.com.  Building on this success, Google transitioned from a backfill search provider like Inktomi, Fast and LookSmart to a destination website like Altavista, Excite, Lycos and HotBot.  After winning the AOL portal as a distribution partner, Google's brand was finally entrenched.  (Google insisted on branded "powered by" messaging that established brand familiarity, unlike the other white-label search providers.)


The emergence of paid search and subsequent market consolidation
The growth in scale of the web through the early 2000s required significant investment in infrastructure.  It also forced the evolution of the business models of existing search providers to subsidize this growth.  LookSmart, Inktomi and Overture offered licensed search engines, in hosted or feed-based models, which consolidated thousands of small-scale sites using single-domain landing pages.  Unlike the destination-site search engines Altavista, Lycos, HotBot and Excite, these distributed search engines could be integrated into the look and feel of the host site's design.  (Integrated look and feel was not supported by Google.)  Based on the consolidated market share these companies developed, pay-to-index (sponsored search) emerged.  LookSmart's and Inktomi's model was pay-for-inclusion in the index.  Goto/Overture's model was bid-for-placement: a paid layer of only the top 3-5 positions, placed above the other search engines' algorithmic results.  Bid-for-placement yielded higher returns for Overture and enabled price competition between the hosted search providers that ultimately favored portals turning to the Overture sponsored model.

Google built its own version of the Overture style of paid search, resulting in a lawsuit over Overture's US patent (USPTO patent number 6,269,361).  Yahoo!, then powered by Google, acquired Overture and resolved the suit in a pre-IPO stock trade with Google.  It then dropped Google as an algorithmic backfill provider and used the Overture revenue to build its own in-house search engine based on the acquired tools of Altavista, Inktomi and Fast, combined with its existing directory.  Microsoft ultimately ended its dependency on Overture and Inktomi to launch its own search engine, which it later cross-licensed to Yahoo! in exchange for ad technology, sales collaboration and placement commitments.  This technology struggle and pricing battle produced the market we have at present: two major search engines, with all portals using bid-for-placement advertising as the business model.


Shift of landscape to users as publishers (aka “Web 2.0”)
With the boom of small publishers and blogs and the emergence of social media, the publisher-centric web faced a significant challenge.  Google’s PageRank is, after all, a publisher-centric signal.  Google and Bing indexing took weeks to surface new content in the web index, so there was a significant lag from publication time to spidering by the crawler.  A need for a more timely experience of the web was emerging.  The fragmenting of publication platforms into different content types also presented a challenge.  Some content was published behind login “walled gardens” no longer accessible to web crawlers.  Many sites actively blocked crawlers with "robots.txt" files or rotated their domains daily to obfuscate new content from indexing.  (Some new companies intentionally hid their content from search engines because they feared the global dominance of US search engines.)  Some publishers transitioned to dynamic pages, meaning a web page was no longer a static indexable entity but an assembled collage of content sources served in a single view at page-load time.  Finally, the seat of relevancy for new content couldn’t depend on an algorithm that took weeks to discover new relevancy signals by waiting for publishers to cross-link to them.  A Blogspot or WordPress author proved just as likely to be an authority on a granular topic as an established site with a high popularity rating in Alexa or Comscore, the popular site-ranking services.  For any trending real-time topic, it might be difficult for a conventional search engine to designate rank-authority from an algorithmic perspective.  New search technologies were needed.
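Python's standard library still ships a parser for the robots.txt convention mentioned above; here is a minimal sketch of how a well-behaved crawler would consult it (the domain and crawler name are hypothetical):

```python
# A well-behaved crawler consults robots.txt before fetching a page, here
# using the standard-library parser. Domain and crawler name are made up.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: *
Disallow: /private/
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# The publisher has fenced off its /private/ section from all crawlers:
print(parser.can_fetch("MyCrawler", "https://walled.example/private/feed"))  # False
print(parser.can_fetch("MyCrawler", "https://walled.example/blog/post1"))    # True
```

A publisher that wanted to hide everything from indexing could simply serve `Disallow: /`, which is exactly the tactic some sites of that era used.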


User-centric data and indexing
With the user-centric publishing trend (along with encrypted data-exchange mechanisms that made it easy for servers to transfer data within logged-in states), many companies that hosted user publishing, such as Blogger, WordPress, Facebook, MySpace and Twitter, enabled open data access via web protocols like RSS, Atom and REST API feeds.  And because an account-authentication token could be passed at query time, users could have a highly personalized view of the real-time web.  Bing, Google and Yahoo! experimented with their own personalized search services during this period.  


Real-time Search concepts
This open-data trend allowed a new class of algorithmic search engines to emerge that did not need to build the complex offline indexes of the former large-publisher world.  In fact, social mentions of mainstream publisher content became a secondary signal of user interest and attention that mapped the “Web 1.0” web as effectively as PageRank, but made it discoverable faster than traditional search engines could.  The unique advantage of these new approaches was that they combined the breadth of interest across the global audience in a subject with the amplitude and duration of its impact on that audience over time.  (Visualize social media impact signals across these three dimensions and you'll grasp the significance in algorithmic analysis terms.)  This is because the nature of microblog sites enabled the capture of reactions at the moment of discovery by readers, across disparate feedback channels.  

Because of the brevity and immediacy of sharing that was taking off on Twitter specifically, and because of its inherently public nature, it became the focus of data mining of real-time social signals.  (Friendster, MySpace, Yahoo! 360 and Facebook were more follower- and private-circle-oriented.)  Early social search start-ups Summize and Topsy introduced means to rapidly search the public corpus of Twitter content, leveraging the symbols users invented to demarcate identity handles (@...) and conversation threads (#...) in tweets.  Topsy focused on who was sharing certain topics and the authority of the source.  Summize focused on who was sharing and what was shared.  Tweetdeck introduced a tool to sift multiple threads in parallel based on these topic and author streams.  Collecta introduced a means to sift the broad corpus of real-time media, including and beyond Twitter, using XMPP to filter by subject matter in an ephemeral, immediate relevancy analysis.  Google launched “Google Realtime” to add social media queries to its traditional web content search.  Bing enabled users to connect a Facebook account to see content that their friends had shared on that social network.
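The @ and # conventions these tools keyed on are simple enough to extract with a regular expression.  A simplified sketch in Python (the handle and hashtag below are made up; real tweet parsing handles Unicode and many edge cases this regex ignores):

```python
# Pulling the identity (@handle) and topic (#hashtag) markers out of a tweet,
# the same user-invented conventions early Twitter search tools keyed on.
import re

HANDLE = re.compile(r"@(\w+)")    # who is mentioned
HASHTAG = re.compile(r"#(\w+)")   # what conversation thread it belongs to

tweet = "Great keynote by @andrenatal at #FISL2013 on speech recognition!"
print(HANDLE.findall(tweet))   # ['andrenatal']
print(HASHTAG.findall(tweet))  # ['FISL2013']
```

Aggregating these two extractions across the public firehose is, in miniature, the "who is sharing" and "what is shared" split that Summize and Topsy each built a business on.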


Social Sharing as currency of the moment
One advantage that social networks (like Facebook, Twitter, Google+, Ning, LiveJournal and Nextdoor) have in understanding the real-time web is the nature of the public act of sharing.  When a social media user makes a public or limited-circle share of a piece of web content, this is a vote of interest for that specific piece of content.  Contrast this with the PageRank vote a publisher makes when they embed an anchor-text link.  Every user in the new schema is elevated to the level of attention-measuring that PageRank had previously attributed only to webmasters.  Consider also how this distributed feedback system creates the ultimate anti-spam data point for algorithmic engines.  It’s fairly easy to spam anchor-text links these days.  It’s very difficult to simulate 100,000 distributed people reacting to a piece of web content.  When a piece of content is tweeted, liked or +1’ed by 1,000 or more users who are not connected to each other within a given social network, that can be taken as an unbiased popularity and legitimacy measure.


Closed data silos: the web's current challenge for the next generation's search engines
While this data is very valuable to the social network itself for tailoring its own engagement approaches, it is also of value outside the network.  These days, however, the data tends to be coveted by the platform owners, for good reason.  There are users' privacy concerns, of course: you want to encourage people to use the system with the privacy settings they expect around their shared content.  And companies like to keep their internal network data closed because it can unleash future economic potential in the form of relationships with marketers.  


The Twitter APIs, as an example, generated a particularly interesting signal from an external perspective because of the public viewability of any single post.  Most users understood that “tweets” were inherently public.  This is why so many developers launched tools to process and digest the feed.  When Twitter shut down API access for Google and other web crawlers, Google found itself needing its own microblog feature to replicate this real-time signal.  Twitter needed to shut down access to the broader web to prepare for its IPO, its shareholders' sense of ownership and its upcoming in-network advertising platforms.  But its success as an open platform shows the promise for future entrants that replicate the openness of its early days.


A real-time barometer of what is popular with Internet users is an invaluable tool for refining any search engine.  Keeping up with the rapidly evolving nature of the web requires more unconstrained sources of interest and relevancy signals as a feedback mechanism to web publishers.  Web publishers of tomorrow also need more ways to get their content in front of new audiences.  As the web is a dynamically growing entity, the opening up of social media services, allowing users to publish more feedback on the web in public view, is a promising step for the future of other search platforms.  

Though we face a highly siloed web ecosystem now, the continued drive to revive openness will be a boon to the tools that emerge in the coming years.



(Perspective of the author: I worked at LookSmart, Overture, Yahoo! and Collecta from 1998 to 2010, so I witnessed these industry shifts from the perspective of someone representing these companies through the changes in the technologies available to us.)

 

Saturday, December 22, 2012

Business relationships in Japan

My team at BTLookSmart Japan
I had the pleasure of launching a business in Japan in 1999.*  It was one of the early entrants in the business of search engine syndication, an industry once dominated by Altavista, Inktomi, and Fast Search and Transfer, and now dominated by Google and Bing after many iterations in the business model.

It was one of the most educational phases of my career.  The Japanese partnerships I built showed me that much of what I'd been taught in university about Japanese business was wrong.  Japanese companies were not averse to partnering with foreign firms.  Japanese customers did not necessarily favor Japanese-made products over foreign ones.  These myths were touted by Kodak in its WTO complaints against Japan in general and Fuji Film specifically.  And when I was in university, I was spoon-fed some of these same myths by representatives in the American Congress who favored market protectionism over enhanced global trade opportunities.

What I learned in college that proved very true was that Japanese business hinges on relationships, even over profit.  Surely this is primarily what Kodak had difficulty with.  It's very difficult to sell in Japan if you don't invest significantly in the relationships that are essential to the fabric of society there.  Fuji Film had invested tremendous amounts in social relationships and brand.  It wasn't mere shelf space that determined Fuji's advantage in the market; it was market presence and people. 

One of my favorite memories in Japan came after the first year following the launch of our search engine.  (The first year was tremendous fun as we slowly accumulated partnership after partnership by investing enormous amounts of time and resources in building an exceptional Japanese product.  But for the point of this story, I'll defer that to a different blog post.)  By 2001 we had a large number of leading Japanese portals powered by our search products, even as Google entered the market. 

This story takes place at the beginning of our second year in the market, time for our contractual renewals.  (Contracts auto-renewed by mutual consent.)  I was visiting each partner to gather feedback on how we could make our services better and to ask each whether they would like to continue working with us.  One of them informed me that they'd received a bid of $1 million in guaranteed payments to switch from our search engine to another.  I told them I was unable to match the offer, as our business model was based on indexing services, not guaranteed payments.  I thanked them for being such a good partner over the first year of our business there.  I prepared to wrap up the meeting, believing that my not offering to match the bid would leave us at an awkward impasse.

My counterpart smiled at me and said (translated), "We're not going to accept it!  After all, they never pick up the phone when we call them.  You do.  And you come to visit us and listen to our feedback.  That's more valuable to us."  I was completely stunned, flattered and honored that my partner considered my service to be worth more than a $1 million buyout check.  We renewed, and they remained our partner until I left the company.

I've tried to dispel myths about Japanese culture when I've heard them and to encourage businesses to seek partnerships and distribution in Japan in spite of the difficulties Americans tend to anticipate there.  I highly encourage anyone interested in learning about Japanese business and trade opportunities to explore the resources at the Japan External Trade Organization: http://www.jetro.go.jp/  Japan is, of course, one of the largest trading partners of the United States.  But it is also one of the most enjoyable places I've ever done business.




*The company I launched in Japan was a branch of the global BTLookSmart brand.  BTLookSmart was a joint venture between British Telecom and LookSmart Ltd. to distribute web directories and web search on a model similar to Yahoo!'s destination portal model, making it affordable to have site search that was not powered by one of the global algorithmic search providers.  BTLookSmart Japan Kabushiki Gaisha achieved 70% market reach before it was acquired by ValueCommerce Japan, which was 50% owned by Yahoo! Japan.  The LookSmart Japan tools our team built were integrated into Yahoo! Japan's "MatchSmart" content advertising engine.  Subsequently Yahoo! Japan acquired all assets of ValueCommerce Japan.  After my departure from BTLookSmart Japan, I joined Overture Services, which was acquired by Yahoo! Inc.  I conducted business development across Yahoo! Asia and returned to Japan to work on mobile advertising with Overture Japan prior to the sale of Overture Japan to Yahoo! Japan.  Never a dull moment in internet business.

Sunday, November 4, 2012

Nurturing the first wind to get to the second

There are many good reasons not to run.  But if you decide to run anyway, there are still issues that can get in your way.  I'll write this to address some of the ways I've found to optimize my run and make it quite enjoyable.  As a rower friend of mine, Ken Clemmer, once commented, rowing is the art of knowing what not to do as you strive for efficient motion.  Running is simple enough.  But there are some habits that can make running difficult or tedious.  I hope some of this is valuable.

I took up running when I was an agile teen, thanks to the inspiration of my high school track team and friends.  Over the years, thanks to genetics, exercise and nutrition, I have bulked up to an ungainly six-foot frame.  But as the nimbler sports seemed to recede from me, running stayed a constant.  (I love rock climbing but find it very challenging due to my weight-to-muscle ratio.)  My biggest hurdle in running has been managing asthma.  Asthma is caused by highly allergic bronchioles that swell in reaction to pollutants and pollen, often complicated by secondary factors as the lungs try to flush out the irritants with fluid.  Exercise can exacerbate asthma symptoms for various reasons, including cortisol release and abnormal breathing patterns.  But exercise can also improve asthma management if done in a low-stress and controlled fashion.  It's my personal belief that running reduces dependence on medications and can lead to expanded lung capacity.  If you're asthmatic, it's important to get your allergies under control before running.  Don't run if it's going to result in needing bronchodilators.  Excessive use of epinephrine (Primatene Mist) or albuterol (Ventolin etc.) can lead to your lungs permanently altering their shape due to the shallow breathing patterns asthmatics can fall into.   

I like to run with minimal visual or social distraction.  Forests are great if you are near one.  But cities are filled with distractions and dangers that make excessive momentum a bad thing.  Indoor running can minimize the impact of pollen, as well as the risk of being injured by moving vehicles or territorial animals.  When on treadmills, I avoid mirrors and TV screens.  You want to sink into the moment of a run, rather than cast about for visual stimulus or be overly self-conscious about getting sweaty and exhausted in front of others.

Music can be an excellent tool for focus in a run.  I also find running to be a great mental state for appreciating the subtleties of my favorite music.  So if you have access to a music player and a library of good music, I recommend experimenting with songs whose BPM (beats per minute) you can pace your run to.  Slower music may compel you to lengthen your stride; a faster BPM may compel you to run in a manic or hurried manner.  So it's good to experiment with music that suits your desired pace and body size. 

I think of three forces of forward propulsion in my run.
1) Desire to run.  It's easier for me to have a good run in the mornings.  If you're not in a good mental state, your run can be sloppy and unrewarding.  It's better not to run at all than to force it and have bad associations with your run that are actually due to something else.  A good mood is enhanced by a run.  A bad mood seldom is.

2) The amount of sugars in your bloodstream.  Make sure that you're well nourished, but not bloated by a meal.  The previous day's meal is probably going to give you the energy you need for today's exercise.  I think it's great to have juice before a run.  Your body isn't going to be doing any meaningful digestion during a run, so complex carbohydrates aren't going to improve your energy much if consumed immediately prior to the run.

3) The amount of oxygen you can keep in the bloodstream to rapidly metabolize those sugars.  This is the biggest factor for me.  So I'll go into extensive detail about it.  This is the core of my running experience.  The first two factors are important.  But breath is the most critical. 

To optimize breath, there are a few things you can consider.  I always start a run slowly, pacing my breath evenly with my stride: typically two paces for every in-breath and two for every out-breath.  Starting a run fast can deplete the oxygen in your blood rapidly and set up a compensating breathing pattern that will quickly exhaust you as you hyperventilate to keep up with the sudden surge of speed.  Long ago I learned the trick of pursing your lips to blow air at pressure, like a silent whistle.  This increases the pressure in your lungs.  At higher pressures, the hemoglobin in your blood takes in oxygen at a higher rate.  (This can also help with hypoxia when you are mountain climbing.)  If you feel momentarily short of breath, in addition to altering your stride or pace, try pursed-lip out-breaths until you feel your blood oxygen level is balanced.

If you've ever wondered why breathing too much or too fast could trigger bronchiole constriction, I recommend reading about the Ukrainian doctor Konstantin Pavlovich Buteyko's theories.  His concepts are generally based on the idea that hyperventilating unbalances the intake of gases other than oxygen into the bloodstream.  Constriction of the bronchioles, by his theory, is an adaptive reaction to limit that imbalance.  While I'm not a practitioner of formal Buteyko methods, this concept has helped me understand a bit better why the upper-chest panting that chronic asthmatics typically do actually exacerbates symptoms in an "asthma attack". 

To avoid breathing too rapidly when running, focus on the out-breath.  Asthmatics tend to focus on breathing in rather than on breathing out.  While these might seem equivalent, shifting the focus can result in deeper breathing patterns and use of your full lung capacity, which in turn leads to slow, deep breaths rather than manic panting.  When running, if you suddenly feel the "itch" of bronchiole constriction, slow and lengthen your pace, slow your breathing to longer out-breaths, and then determine whether you should continue.  Sudden stops can exacerbate symptoms too.  So it's good to view everything about exercise as gradual.

When your breathing is optimal in a run, you can unlock a tremendous amount of energy and endurance.  It typically takes a quarter or half a mile of running before my pace is optimized.  This is when the run gets really fun.  It's a feeling of being zoned-in.  Time seems to slow.  You'll be aware of the flow of your body in ways that you can make subtle adjustments.  After a mile or so, you may get a feeling of "second wind", as your run becomes very calm.  Your breathing may slow even more.  If you want to keep going, you can pay attention to a few other things that will make your running endurance better.

There are many good instructions on running stride.  I'm not much of an expert here.  What I focus on is keeping my head balanced like a golf ball on a tee, with little neck muscle involvement and no throat constriction.  I tend to land slightly on the ball of the foot instead of the heel, avoiding excessive upward motion or shock to the heels and knees.  I tend to relax my arms.  Holding them up can make me speed my stride; letting them hang lower gives me a longer, slower stride, which I enjoy.  I give little thought to speed.  If I'm on a treadmill, speed is weighed only against the blood sugar I feel.  At high-energy times I'll run faster with longer strides.  At low-energy times I'll run slower with shorter strides.

A Buddhist abbess once commented in a Zen Center dharma talk that as your practice improves the inefficient things you do will fall away until there's nothing but the practice.  As I get deep into second wind, my run feels like this.  I'm able to breathe very calmly and my thoughts can sink into the music I am enjoying and the splendor of what it feels like to be a human.   (I regret I don't recall the name of the abbess who gave the talk.  http://sfzc.org/ was the venue.)


This article is dedicated to the memory of Robert Volberg.  Restaurateur extraordinaire.  Founder of Angeline's, Berkeley.  Robert died of an asthma attack on June 23rd, 2010.

Wednesday, February 1, 2012

Joining the Mozilla BrowserID Initiative

Have you ever had trouble remembering the login to one of your dozen or so online accounts?  Or have you ever had confusion over which user is logged into a given webmail or social network on a shared computer?

Mozilla Firefox is rolling out a new browser-based initiative that stores all of the IDs for your online accounts behind one single login identity.  You can think of it as a single key to your online identities instead of a keychain of dozens of unique keys.  BrowserID will be even more secure than the complex conventional manner of site login management, as you will be able to centrally control all of the accounts associated with your personal online content.

Over the coming months I will have the privilege of working with Mozilla's engineers to bring this platform to a broader audience through partnerships with leading browser, email and social network providers across the web.

BrowserID will enable you to toggle between professional and personal online accounts with ease, and it will make it much simpler for families to share computers without the confusion that results from different signed-in users having different browser cookies across multiple web properties.  Like a BMW that identifies its driver as he or she approaches, a Firefox browser with BrowserID enabled will effortlessly escort you to your frequented websites without the typical confusion of multiple cryptic logins.
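For the technically curious, here is a rough sketch of how a website's server might check a BrowserID login, assuming the hosted verification endpoint Mozilla has described for the platform.  The function names are my own, and the flow is a simplification, not the definitive implementation:

```python
# Hedged sketch: a site receives a signed "assertion" from the browser after
# the user logs in, then asks Mozilla's hosted verifier to confirm it was
# issued for this site's origin (the "audience").
import json
from urllib import request as urlrequest

VERIFIER_URL = "https://verifier.login.persona.org/verify"  # hosted verifier endpoint

def build_verification_payload(assertion: str, audience: str) -> bytes:
    """Encode the browser's assertion and this site's origin as a JSON body."""
    return json.dumps({"assertion": assertion, "audience": audience}).encode("utf-8")

def verify_assertion(assertion: str, audience: str) -> dict:
    """POST the assertion to the verifier; a success response identifies the user's email."""
    req = urlrequest.Request(
        VERIFIER_URL,
        data=build_verification_payload(assertion, audience),
        headers={"Content-Type": "application/json"},
    )
    with urlrequest.urlopen(req) as resp:
        return json.load(resp)
```

The key design point is that the site never sees a password: it only learns a verified email address, and the browser remains the keeper of the single login identity.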

From the company that popularized browser-based advertisement controls and tabbed browsing, a new advancement in web technology is about to unfold.  Web surfing is about to get faster and friendlier.

Stay tuned as we roll out more exciting features of this product to a browser or website near you.

Wednesday, January 4, 2012

EEG hats for everyone

NeuroSky Chip Toy
There are a few interesting companies developing "Brain Computer Interfaces" for toys and digital devices.  These devices read electrical fields above your scalp that indicate activity happening inside your skull.  Though the devices can't capture thoughts, they can signal which regions of your brain are active at any given moment.  What this means is that the skin of your head can be used in lieu of hand gestures, replacing keyboard, mouse or joystick input.

Developers should care about this because two of these companies are seeking our help: they are inviting us to code applications that leverage their consumer headsets.  My colleagues and I have been testing the different tools to see if they can plug into mobile apps for use in stress management or mobile gaming.  These tools are currently used in the medical field by those who lack the ability to use conventional computer interfaces.  The question is whether they will ever supersede the hand (keyboard/mouse/gesture) or the tongue (Siri/DragonDictate/GoogleVoiceSearch) for interfacing with computers.

Force Trainer - Uncle Milton
NeuroSky is the price performer in consumer electronics so far ($40-$70).  Its chips have been mass-produced with Mattel and Uncle Milton Toys to bring devices to US households that respond to a very basic single command originating in the cerebral cortex and (perhaps) the frontal lobe.  The significant benefit of NeuroSky's chips and sensors is that they are dry-contact, not requiring the wet or gel contacts used in medical-grade EEG.  The left-forehead contact is the point that is supposed to affect the toy, elevating a ping-pong ball when the mind is still in concentration.

NeuroSky MindFlex Toy
NeuroSky claims to be coming out with a new headset similar to the toys released by Uncle Milton (above) and Mattel (left), but one that will interface with your mobile applications instead of the hard-wired, hard-coded toys previously released.  They are hosting meetups in Silicon Valley to work with developers on the first round of apps that will interface with these headsets, so stay tuned on that front.  One thing they tell us, though, is that there will still only be the on/off command structure of the frontal lobe input, so don't expect right/left commands or complex motor interpretation, such as in a spatial game.

A very attractive aspect of NeuroSky products is that they are already bluetooth based.  So the player doesn't need to worry about wires.  This can give the user the illusion of some kind of telepathy, which the presence of wires might minimize. Also, the dry-contact sensors overcome the potential consumer adoption hurdle that medical grade EEG gel contacts would encounter. 

Our experience with the NeuroSky chip is that it is very easy to set up but challenging to manipulate.  The process of control is to occupy the mind with a focused thought for a steady period, keeping the contacts in the headset sensing a continual, steady signal.  High mental activity causes the ping-pong ball to drop and stay still, while, allegedly, a still but focused mind produces the steady signal necessary to complete the circuits and turn the lights/fan on.
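NeuroSky's headsets stream readings over a simple serial packet format (marketed as "ThinkGear").  As a hedged illustration of what an app reading the headset deals with, here is a minimal Python parser sketch; the sync byte, value codes and checksum rule reflect my reading of NeuroSky's published protocol docs and should be treated as assumptions:

```python
# Sketch of a parser for a ThinkGear-style serial packet:
# two 0xAA sync bytes, a payload length, the payload, and a checksum
# (the inverted low byte of the payload sum).
SYNC = 0xAA
CODE_POOR_SIGNAL = 0x02   # signal quality (0 = good contact)
CODE_ATTENTION = 0x04     # "focus" level, 0-100
CODE_MEDITATION = 0x05    # "calm" level, 0-100

def parse_thinkgear_packet(data: bytes) -> dict:
    """Return a dict of single-byte readings (attention, meditation, poor_signal)."""
    if len(data) < 4 or data[0] != SYNC or data[1] != SYNC:
        raise ValueError("missing sync bytes")
    plength = data[2]
    payload = data[3:3 + plength]
    checksum = data[3 + plength]
    if (~sum(payload)) & 0xFF != checksum:
        raise ValueError("bad checksum")
    names = {CODE_POOR_SIGNAL: "poor_signal",
             CODE_ATTENTION: "attention",
             CODE_MEDITATION: "meditation"}
    values, i = {}, 0
    while i < len(payload) - 1:           # walk code/value pairs
        code, value = payload[i], payload[i + 1]
        if code in names:
            values[names[code]] = value
        i += 2
    return values
```

An app polling this stream would, for example, raise the virtual ping-pong ball whenever the "attention" value stays above some threshold for a sustained period.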

Epoc Headset and USB
The more versatile consumer headset is the Epoc headset from Emotiv ($299).  This one has 16 sensors across the top of the scalp, so it is able to pick up points that can reflect complex motor thoughts such as right/left/forward/backward.  In addition, Emotiv claims to capture some emotive states and even facial expressions.

The advantage of the Epoc is that it is already open to developers, interfacing to your computer through a USB key.  They are soliciting all developers to start filling out their proprietary app store (iTunes model) for games and tools that other consumers will be able to use with their own Epoc headsets in the future.  So we have the opportunity to start coding for a headset that is already pretty close to medical grade.

Currently, developers can only code their apps for Windows on the PC platform.  In the future, we may be able to use this tool on Apple and mobile operating systems as well, but there have been no promises on this from Emotiv.

Epoc Touchpoint Scan
The disadvantage of the Epoc is that you do need wet contacts on the scalp in order to pick up the electrical signals.  It's unlikely that consumers will be willing to re-apply the saline solution for each sitting of their EEG games, but this is what we have to work with at this point.  Training the Epoc is a very gradual process of pairing detectable contact-point combinations with the specific output commands the user wants to exert in an Epoc-compatible game or tool.

There isn't much point in consumers purchasing the Epoc yet, because the developer community hasn't produced a broad range of tools or games for exploration.

NIA Headset & CPU
I would like to give an honorable mention to the NIA (Neural Impulse Actuator) headset from OCZ.  It's "honorable" only because it's no longer in the race, as OCZ has discontinued manufacture of the product.  However, they were able to develop quite sophisticated software for the PC interface, produce a small CPU to read and interpret the input signals, and manufacture the headset for under $100. The disadvantage of the NIA was that it read only three point contacts across the forehead, and would also pick up electrical signals generated by the muscular motion of the eyebrows.

I really appreciated the ambitious scope of the NIA software, which captured right/left commands that the user could map to actual USB and keyboard keystrokes used in game control.  If it didn't need to be calibrated in Windows, this would in theory enable EEG control for the Xbox, PlayStation, or any other device that accepts market-standard, platform-agnostic USB input.
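The mapping idea is simple in principle.  Here is a toy Python sketch in the spirit of the NIA's user-configurable bindings: a normalized signal level is converted to a left or right keystroke, with a dead zone around the resting baseline.  The threshold and key names are hypothetical, not taken from OCZ's software:

```python
# Toy sketch: map a normalized neural-input signal to game keystrokes.
# Values near zero (the resting baseline) are ignored via a dead zone,
# so muscle noise doesn't fire spurious commands.
from typing import Optional

DEAD_ZONE = 0.2  # hypothetical threshold; real software would let the user tune this

def signal_to_key(level: float) -> Optional[str]:
    """Map a signal in [-1.0, 1.0] to a left/right key name, or None for no input."""
    if level <= -DEAD_ZONE:
        return "LEFT_ARROW"
    if level >= DEAD_ZONE:
        return "RIGHT_ARROW"
    return None  # inside the dead zone: no keystroke
```

The emitted key name would then be handed to whatever keystroke-injection layer the game platform accepts.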

NIA direct USB Input
A common critique of brain-computer interface products is that they are complex to start using.  I'd have to say that the NeuroSky products are the easiest to use out of the box.  (Both the MindFlex and the Force Trainer were up and running less than a minute after battery installation.)  The Epoc and NIA take multiple steps to set up and quite a long process to calibrate to the user.

All of these devices require a learning curve as the user gets familiar with the idea of sending signals to a machine from a part of the body that tends to be largely passive.  In the distant future, adapting to a device controlled through the skin of the scalp may take only as long as one's first interaction with a touch screen.  For now, watching users wince and squint as they try to flex their brains with these devices shows how foreign the concept still is to mainstream consumers.

Thursday, December 29, 2011

Air-Wire Device Pairing for Games

I'd posted back in June about opportunities and approaches in pairing between mobile devices as smart phones and tablets proliferate.  ncubeeight has teamed up with ViSSee computer vision of Switzerland to create a new US joint venture, Air-Wire, to develop paired-device infrastructure tools for game developers interested in creating more robust and immersive gaming experiences.

Through the iPhone App Store, Apple has re-invented and expanded the shareware model of the 1990s.  But whereas shareware depended on giving software away free, with a small percentage of customers contributing toward the development costs, Apple's App Store permits a more lucrative model in which every customer chips in a little bit, creating a boom in scalability for the independent developer community.

Now that many consumers have multiple smart devices (iPhone, iPad, Macintosh computer, AppleTV) in their household, wireless pairing lets a single consumer use several of those devices together in a single task.

The benefit of the ViSSee computer vision tools is that a mobile phone can now capture gestures beyond touch through input from the embedded camera, interpreted by the device's native CPU (central processing unit).  Air-Wire's products will permit an iPhone to be used as a joystick or input mechanism for a game running on a separate device, be it another iPhone, iPad, computer, TV or utility-connected device.

Microsoft Kinect and Nintendo Wii have pioneered infrared-based peripherals for remote input tracking, replacing the mouse, trackball or stylus (which were abstracted controls for gaming computers) with more intuitive tracking based on natural body motion.  Now that many "smart devices" such as mobile phones contain both a camera and a CPU of their own, they can render intelligible messages to a remote computer as preprocessed input commands without needing infrared.

Air-Wire's infrastructure tools will, for example, permit driving games to detect foot position for input commands such as braking and accelerating, while the player steers the tablet-hosted game by using the tablet itself as a steering wheel and projects the game play to an external screen through Apple's AirPlay.
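The "preprocessed input command" idea is the crux: the phone interprets its own camera and sensor data locally and sends only a compact command to the game host, rather than streaming raw video.  Air-Wire's actual wire format isn't public, so the message encoding below is an entirely hypothetical Python sketch of what such a command might look like:

```python
# Hypothetical sketch of a paired-device control message: one small,
# self-describing datagram per input event (steering angle, brake, etc.),
# suitable for sending over a local wireless link.
import json

def encode_command(action: str, value: float, player: int = 1) -> bytes:
    """Encode one control event as a compact UTF-8 JSON datagram."""
    return json.dumps({"player": player, "action": action, "value": value}).encode("utf-8")

def decode_command(datagram: bytes) -> dict:
    """Decode a datagram back into a command dict on the game-host side."""
    return json.loads(datagram.decode("utf-8"))
```

Because each message is a few dozen bytes rather than a video frame, the host device can service many input events per second without the latency that raw-sensor streaming would impose.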

As device-pairing opportunities expand with the distribution of tablet, mobile, and clothing accessory remote chips like the Jawbone "UP" wristband, more market opportunities open up for developers.  And we'll have more to show you.