Wednesday, January 4, 2012

EEG hats for everyone

NeuroSky Chip Toy
There are a few interesting companies developing "Brain Computer Interfaces" for toys and digital devices.  These devices read electrical fields above your scalp that indicate activity happening inside your skull.  Though the devices can't capture thoughts, they can signal which regions of your brain are active at any given moment.  What this means is that the skin of your head can be used in lieu of hand gestures, replacing keyboard, mouse, or joystick input.

Developers should care about this because two of these companies are seeking our help by inviting us to code applications that leverage their consumer headsets.  My colleagues and I have been testing the different tools to see if they can socket into mobile apps for use in stress management or mobile gaming.  Similar tools are already in use in the medical field for those who lack the ability to operate conventional computer interfaces.  The question is whether these tools will ever supersede the hand (keyboard/mouse/gesture) or the tongue (Siri/DragonDictate/GoogleVoiceSearch) for interfacing with computers.  

Force Trainer - Uncle Milton
NeuroSky is the price performer in consumer electronics so far ($40-$70).  Its chips and sensors have been mass-produced in partnership with Mattel and Uncle Milton Toys to bring these devices into US households, with a very basic single command that comes from the cerebral cortex and (perhaps) the frontal lobe.  The significant benefit of NeuroSky's chips and sensors is that they are dry-contact, not requiring the wet or gel contacts used in medical-grade EEG.  The left forehead contact is the point that is supposed to affect the toy, elevating a ping-pong ball when the mind is still and concentrating.

NeuroSky MindFlex Toy
NeuroSky claims to be coming out with a new headset similar to the toys released by Uncle Milton (above) and Mattel (left), but one that will interface with your mobile applications instead of the hard-wired, hard-coded toys previously released.  They are hosting meetups in Silicon Valley to work with developers on the first round of apps that will interface with these headsets.  So stay tuned on that front.  One thing they do tell us is that there will still only be the on/off command structure of the frontal lobe input.  So don't expect to do right/left or the complex motor interpretation a spatial game would require.

A very attractive aspect of NeuroSky products is that they are already Bluetooth-based, so the player doesn't need to worry about wires.  This can give the user the illusion of some kind of telepathy, which the presence of wires might diminish.  Also, the dry-contact sensors overcome the consumer adoption hurdle that medical-grade EEG gel contacts would encounter. 

Our experience with the NeuroSky chip is that it is very easy to set up but challenging to manipulate.  The process of control is to occupy the mind with a focused thought for a steady period, keeping the contacts in the headset sensing a continuous, steady signal.  High mental activity causes the ping-pong ball to drop and stay still.  And, allegedly, a still but focused mind produces the steady signal necessary to complete the circuit and turn the lights/fan on.
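The single on/off control these toys expose can be sketched as a smoothing-and-threshold loop over the headset's attention readings.  The numbers and threshold below are purely illustrative (this is not NeuroSky's actual signal scale or protocol), but they show the shape of the idea: a sustained, focused signal flips the circuit on.

```python
from collections import deque

def attention_switch(readings, threshold=60, window=5):
    """Yield True (circuit on) when the smoothed attention value
    reaches the threshold, False otherwise."""
    recent = deque(maxlen=window)  # rolling window of raw readings
    for value in readings:
        recent.append(value)
        smoothed = sum(recent) / len(recent)
        yield smoothed >= threshold

# Simulated attention stream: distracted at first, then focused.
stream = [20, 30, 35, 70, 75, 80, 85, 90, 40, 30]
states = list(attention_switch(stream))
```

Note that the rolling average means the ball doesn't respond to a single spike of focus; the signal has to be held steady for several readings before the switch flips, which matches the "occupy the mind for a steady period" experience described above.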

Epoc Headset and USB
The more versatile consumer headset is the Epoc from Emotiv ($299).  This one has 16 sensors across the scalp, so it is able to pick up points that can reflect complex motor thoughts such as right/left/forward/backward.  In addition, they claim to capture some emotive states and even facial expressions.

The advantage of the Epoc is that it is already open to developers, interfacing with your computer through a USB key.  They are soliciting developers to start filling out their proprietary app store (on the iTunes model) with games and tools that other consumers will be able to use with their own Epoc headsets in the future.  So we have the opportunity to start coding for a headset that is already pretty close to medical grade.

Currently, developers need to code their apps in Windows for the PC platform only.  In the future, we may be able to use this tool for Apple and mobile operating systems as well.  But there have been no promises on this from Emotiv.

Epoc Touchpoint Scan
The disadvantage of the Epoc is that you do need wet contacts on the scalp in order to pick up the electrical signals.  It's unlikely that consumers will be willing to re-apply the saline solution for each sitting of their EEG games, but this is what we have to work with at this point.  The training process for the Epoc is a very gradual pairing of discoverable contact-point combinations with the specific output commands the user wants to exert on the Epoc-compatible game or tool. 
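A rough way to picture that training pairing is a nearest-centroid classifier: average the sensor feature vectors recorded while the user holds each command in mind, then map new readings to whichever trained command they sit closest to.  This is only a conceptual sketch; Emotiv's actual SDK and feature extraction are proprietary, and the commands and three-channel vectors here are made up.

```python
import math

def train(samples):
    """samples: {command: [feature vectors]} -> {command: centroid}"""
    centroids = {}
    for command, vectors in samples.items():
        n = len(vectors)
        centroids[command] = [sum(col) / n for col in zip(*vectors)]
    return centroids

def classify(centroids, vector):
    """Return the command whose trained centroid is nearest."""
    return min(centroids, key=lambda cmd: math.dist(centroids[cmd], vector))

# Hypothetical training session: two commands, two recordings each.
samples = {
    "push": [[0.9, 0.1, 0.2], [0.8, 0.2, 0.1]],
    "pull": [[0.1, 0.9, 0.8], [0.2, 0.8, 0.9]],
}
centroids = train(samples)
```

The "very gradual" part of real training comes from collecting enough recordings per command that the centroids separate cleanly, which this toy example glosses over.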

There isn't much point in consumers purchasing the Epoc at this stage because the developer community hasn't yet produced a broad range of tools or games for exploration. 

NIA Headset & CPU
I would like to give an honorable mention to the NIA (Neural Impulse Actuator) headset from OCZ.  It's "honorable" only because it's no longer in the race: OCZ has discontinued manufacture of the product.  However, they were able to develop quite sophisticated software for the PC interface, produce a small CPU to read and interpret the input signals, and manufacture the headset for under $100.  The disadvantage of the NIA was that it read only three contact points across the forehead, and it would also pick up electrical signals generated by the muscular motion of the eyebrows.

I really appreciated the ambitious scope of the NIA software, which captured right/left commands that the user could map to actual USB and keyboard keystrokes used in game control.  If it didn't need to be calibrated in Windows, this would in theory enable EEG control for the Xbox, PlayStation, or any other device that accepts the market-standard, platform-agnostic USB input.

NIA direct USB Input
A common critique of brain computer interface products is that they are complex to start using.  I'd have to say that the NeuroSky products are the easiest to use out of the box.  (Both the MindFlex and Force Trainer were up and running in less than a minute after battery installation.)  The Epoc and NIA take multiple steps to set up and quite a long process to calibrate to the user.

All these devices require a learning curve as the user gets familiar with the idea of sending signals to a machine from a part of the body that tends to be largely passive.  In the distant future, learning to interface with a new device through the skin of our scalp may take no longer than one's first interaction with a touch screen.  But for now, watching users wince and squint as they try to flex their brains with these devices shows how foreign the concept remains to mainstream consumers.

Thursday, December 29, 2011

Air-Wire Device Pairing for Games

I'd posted back in June about opportunities and approaches in pairing between mobile devices as smart phones and tablets proliferate.  ncubeeight has teamed up with ViSSee, a computer vision company from Switzerland, to create a new US joint venture, Air-Wire, to develop paired-device infrastructure tools for game developers interested in creating more robust and immersive gaming experiences.

Through the iPhone App Store, Apple has re-invented and expanded the shareware model of the 1990s.  But whereas shareware depended on free software for all, with a small percentage of customers contributing toward the development costs, Apple's App Store permits a more lucrative model in which every customer chips in a little bit, creating a boom in scalability for the independent developer community. 

Now that many consumers have multiple smart devices (iPhone, iPad, Macintosh computer, AppleTV) in their household, a single consumer can wirelessly pair several devices for use in a single task.

The benefit of the ViSSee computer vision tools is that a mobile phone can now capture gestures beyond touch through input from the embedded camera, interpreted by the device's own CPU (central processing unit).  Air-Wire's products will permit an iPhone to be used as a joystick or input mechanism for a game running on a separate device, be it another iPhone, an iPad, a computer, a TV, or a utility-connected device.

Microsoft's Kinect and Nintendo's Wii have pioneered infrared-based peripherals for remote input tracking, replacing the mouse, trackball, or stylus (which were abstracted controls for gaming computers) with more intuitive tracking based on natural body motion.  Now that many smart devices such as mobile phones contain both a camera and a CPU of their own, they can render intelligible messages to a remote computer as preprocessed input commands, without needing the infrared.

Air-Wire's infrastructure tools will, for example, permit a driving game to detect foot position for input commands such as braking and accelerating, while the player steers with the tablet hosting the game itself, which in turn projects the game play through Apple's AirPlay to an external screen.
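The underlying pairing mechanics can be sketched with plain UDP datagrams: the phone acts as the controller, preprocessing its sensor input into small command messages, and the game host decodes them.  Air-Wire's actual protocol isn't public, so this is only the shape of the idea, demonstrated here over the loopback interface.

```python
import json
import socket

def send_command(sock, addr, command):
    """Controller side: encode one input command as a small JSON datagram."""
    sock.sendto(json.dumps(command).encode("utf-8"), addr)

def receive_command(sock):
    """Game side: decode one datagram back into a command dict."""
    data, _ = sock.recvfrom(1024)
    return json.loads(data.decode("utf-8"))

# Demo on one machine: one socket plays the game host, another plays
# the phone acting as a steering wheel / pedal controller.
host = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
host.bind(("127.0.0.1", 0))      # let the OS pick a free port
addr = host.getsockname()

phone = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_command(phone, addr, {"steer": -0.4, "brake": False})
command = receive_command(host)

phone.close()
host.close()
```

UDP fits game input well because a late steering message is worthless; it's better to drop it and act on the next one than to stall the game waiting for retransmission.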

As device-pairing opportunities expand with the distribution of tablet, mobile, and clothing accessory remote chips like the Jawbone "UP" wristband, more market opportunities open up for developers.  And we'll have more to show you.

Wednesday, September 7, 2011

Apple's foray into washable electronics

In his Scientific American article "The Computer for the 21st Century," Mark Weiser wrote an insight I've often returned to:  "The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it."

I've been thinking a lot about the legacy of Steve Jobs and his Apple Computer since the announcement that he is stepping down from his CEO role to become Chairman of the Apple Board of Directors.  Apple's products have significantly shaped my life over the past two decades, starting with my first Mac, an SE/30 that I bought in 1987.  Since then, I've bought over twenty Apple products, and for the past two years I have worked in the app development space. 

I'm of course very grateful for the way Apple products have inspired me through my educational years, in my career as an inventor and businessman, as well as in my avocation as a composer.  What Apple has done with its app marketplace is a radical reinvention of the shareware model of software development, enabling everyone to pay some small amount instead of only the 1-2% who paid in shareware's previous incarnation.  Apple has made it possible for small businesses like mine to pursue their passion for invention, making small projects profitable like never before.

Apple has changed my life significantly for the better.  That's a big statement.  But I'd like to comment on a small life vignette today.  When American car manufacturers turned to the Japanese auto industry to understand how to improve their own manufacturing, they found that it was attention to incredibly small details that led to the success of the Japanese manufacturing process.  Apple has always done that.  What has always amazed me about Apple products is that new computers would be packed with features their future users wouldn't even know they could ask for, or even want: speech recognition, games, full movie-editing suites, music-composition suites.  Apple just put them in, knowing that users would become creators if they had the software without the hurdle of having to buy it.  That meant Apple computers were more expensive than Windows husks.  But their users would find themselves doing more with them.

There is one particular detail I learned about Apple's touch-screen iPod Nano only recently.  Like Mark Weiser's ideal computers that disappear, the Nano has a propensity for getting lost quite easily.  Ask me: I just washed mine on the warm cycle with my gym clothes.  The surprising thing is that, on a whim, I plugged it into my clock radio, and it was immediately able to play songs from memory!  The water had of course drained the battery by shorting the exposed charging contacts.  But after a day or so it had fully recharged and the screen lit up.  What kind of company makes washable electronics?  I'm not surprised it's Apple. 

Of course, Apple doesn't tout that its products are waterproof.  They don't tout a lot of features that they include free of charge.  But it sure is a delight to find new benefits when you don't expect them.  I do not suggest you wash your iPod deliberately.  (I'm not planning to ever repeat this experiment.)  And if you ever do wash your iPod Nano or iPod Shuffle and want to recharge it, it's best to attach it to a USB port or the charging station of a clock radio rather than a wall charger, which could be risky in the case of an actual short circuit if the casing has been compromised.

So I must thank Apple again for surprising me with the waterproofness of my iPod Nano.  It's the small details they pay attention to that make Apple users so cultishly loyal to the brand.  As their technology "disappears" into the fabric of your everyday laundry, it helps for it to be wash-and-wear ready.

Saturday, July 2, 2011

My phone is self-aware

My Egypt trip including Abu Simbel, Aswan, Luxor, Cairo
In early April I wrote of the need to utilize our mobile phone technology to address use cases ranging from personal navigation problems to large-scale disaster response.  On the small scale, my personal scenario involved getting lost on a mountaintop after sunset in Jordan, wanting to retrace my steps with my iPhone.  On the large scale, thousands of souls were lost on the coast of Tohoku, Japan, when the tsunami struck in March, their loved ones not knowing where they were, even though many of them were likely carrying cell phones that could have been tracking their position, if there'd been an app for that.

Subsequently, several crafty developers uncovered a location file that resides on iOS 4 Apple phones.  Unfortunately, there is no way to access this data in real time, either from the personal perspective or from a server backup from a broader perspective.  However, at the O'Reilly Where 2.0 Conference, Pete Warden and Alasdair Allan demonstrated and published a program called iPhone Tracker that lets individuals view their own device-embedded data.  I recommend every iPhone user try this. 
My motion through Cairo over a five day period

Using the iPhone Tracker application on my Mac, I was able to extract the data my iPhone had captured during my trip through the Middle East.  At upper left you can see my tracks from Egypt's southernmost town of Abu Simbel, along the Nile through Aswan and Luxor (old Thebes), and on to Cairo.  Zooming into my Cairo stay, at right, you can see the day I spent at the Cairo Museum and Tahrir Square overlapped with the day I spent in the historic Islamic Cairo quarter, then two separate days for the visits to Coptic Cairo and the Citadel mosque built by Salah ad-Din atop the hill south of Cairo to defend against crusaders.  The far-right cluster of points is my last few hours at the airport before departure. 

Since the release of iPhone Tracker, the research and development team at the New York Times has launched a web service that allows any user to upload their personal location history file for backup, in case Apple discontinues or alters the format of the location file.  You can render your own data like the maps above by visiting their service at www.openpaths.cc.

According to Apple, the location file on iPhones is meant to keep a history of phone connectivity through cell towers and wi-fi hot spots as reference points to quickly render map data, not to actually track the user.  (The data stays on the phone and is not transmitted externally.)  However, I see incredible value in an app that actually does transmit location information to an external server, for the location-backtracking I needed in Jordan, or so relatives can check on the location of their loved ones in emergencies.  The technology to do this is now prevalent; it just takes the will and the time to make it so.
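For the curious, the location file the researchers found is an ordinary SQLite database (widely reported as consolidated.db), so pulling points out of it is a short script.  The table and column names below follow published descriptions but should be treated as assumptions; this demo builds a mock database in memory rather than reading a real phone backup.

```python
import sqlite3

def load_track(conn):
    """Return (timestamp, lat, lon) rows ordered by time from a
    location-history table like the one found on iOS 4 devices."""
    cur = conn.execute(
        "SELECT Timestamp, Latitude, Longitude "
        "FROM CellLocation ORDER BY Timestamp"
    )
    return cur.fetchall()

# Build a mock database in memory with a few points along the Nile.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE CellLocation "
    "(Timestamp REAL, Latitude REAL, Longitude REAL)"
)
conn.executemany(
    "INSERT INTO CellLocation VALUES (?, ?, ?)",
    [
        (1.0, 22.34, 31.62),   # near Abu Simbel
        (2.0, 24.09, 32.90),   # near Aswan
        (3.0, 25.69, 32.64),   # near Luxor
        (4.0, 30.04, 31.24),   # near Cairo
    ],
)
track = load_track(conn)
conn.close()
```

A script like this is essentially all iPhone Tracker needed to do before handing the points to a map renderer.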

Stay tuned...

Friday, May 20, 2011

Future-scope

I was listening to the Apollo 11 logs sampled in the Austere composition "Principium Somniferum", recorded as the spacecraft departed Earth's gravitational influence to be captured by the Moon's.  It's a profound composition inspired by the similarly profound story of men slingshotting themselves in a bubble of metal away from the protection of Earth's physics toward a desolate chunk of rock, on the strength of extreme faith in their mathematical calculations and the belief that they'd packed adequate provisions for the journey. 

The transitional moment the Houston engineer spoke of reminded me of a concept my father often pondered and discussed.  Humans, bipeds whose brains evolved at walking or running speeds, have to adapt their thinking to the discontinuity of the way we travel today.  Our minds run a timeline of prediction minutes, hours, days ahead of where we are at the present moment.  When we make large leaps, like from SFO to NYC, he says there is a point before we even reach our airplane at which our minds make the leap ahead of us.  For the moments after that mental leap, our bodies walk in old-space while our minds operate and plan in new-space.  We become spatially spread zombies with head in one place and body in another.  (This may be why it's so disorienting to have our flights cancelled: it's a virtual decapitation.)

These thoughts in turn make me wonder how long it will take for our technology to catch up with our travel-bound heads.  Our location-based services (LBS) are great at tracking the trail we've left behind.  They help us savor our last meal with lovingly posed spreads on Yelp; they help us scan the crowd at a venue we visit to see if the digital presence of our kin can be sniffed out on Foursquare or Gowalla.  The first pioneers of the space are using the mechanics of group interaction to capture shared intentions.  (Ditto, Plancast, and Tripit see that the social momentum of friendship can turn plans that would otherwise be solitary into shared ones, if communicated.) 

The way we engage with the world through social interactions is perhaps the easiest way, with current technology, to make predictions about the future.  For example, if a 415 area code starts communicating intensely with 212s, more than with all local contacts combined, we can tell that this phone's owner is in the travel-zombie state.  Tools like Waze's crowdsourced maps can measure velocity to make aggregate predictions about the status of the location phones are moving through, and even about the phone owners themselves. 
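As a toy illustration of that 415-to-212 heuristic, a few lines of counting suffice.  The call-log format and the majority threshold here are invented for the sketch; a real system would weigh recency, call duration, and much more.

```python
from collections import Counter

def travel_signal(call_log, home_area="415", threshold=0.5):
    """Guess that a phone's owner is mentally 'already elsewhere' when
    most recent calls go to a single non-home area code.
    call_log is a list of dialed area codes, most recent last."""
    away = Counter(code for code in call_log if code != home_area)
    if not call_log or not away:
        return None
    code, count = away.most_common(1)[0]
    return code if count / len(call_log) > threshold else None

# Illustrative: a 415 owner suddenly calling 212 numbers before a NYC trip.
recent_calls = ["212", "212", "415", "212", "212", "650"]
destination = travel_signal(recent_calls)
```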

We will soon see a future-scope tool, perhaps first in mobile app form, that helps us make the "teleporting" leap from where we are to where we intend to be.  Perhaps we can flag to those around us a status of "I'm not really here," so they behave courteously toward our zombie bodies as the shift of reference is made to our new place of being.  Perhaps those in our new location will be able to sense our presence; our disembodied head of intent will manifest itself before our physical body has to show up.  Our travel limbo will remain disorienting until body and mind can move at the same speed.  Right now, the technology we experience is footprint-centric instead of intention-centric. 

As we slingshot ourselves about in our metal bubbles with extreme faith, half present where we are, perhaps technology will help us be the slightest bit more aware.  Often, in travel, we could not be more absent.




For downloads of Austere's Principium Somniferum visit:
http://www.cdbaby.com/cd/freqmagnet
For Austere discography visit: http://www.discogs.com/artist/Austere

Quoted from Apollo 11 Mission logs:
"This is Apollo Control at 61 hours, 39 minutes. We've had no further conversation with the crew since our last report. Flight Surgeon says there is no indication at this time that they have begun to sleep, but we expect they'll be getting to sleep here shortly. Coming up in less than 10 seconds now, we'll be crossing into the sphere of influence of the Moon. A computational changeover will be made here in Mission Control at this point, as the Moon's gravitational force becomes the dominant effect on the spacecraft trajectory, and our displays will shift from Earth-reference to Moon-reference. At that point, which occurred a few seconds ago, the spacecraft was at a distance of 186,437 nautical miles from Earth, and 33,822 nautical miles from the Moon. The velocity with respect to the Earth was 2,990 feet per second, and with respect to the Moon, about 3,272 feet per second. The Passive Thermal Control mode that was set up for the second time by the crew appears to be holding well at this point, and all spacecraft systems are functioning normally. Mission going very smoothly. At 61 hours, 41 minutes; this is Apollo Control, Houston. " 

Source: http://history.nasa.gov/ap11fj/10day3-flight-plan-update.htm

Monday, April 4, 2011

Drive Into the Tsunami

This morning I was struck by the story of Susumu Sugawara on CNN who said that when he heard the tsunami sirens on Oshima, he jumped into his boat, riding into the oncoming wave to avoid losing his boat and risking his island's isolation in the aftermath.

The fact of his survival is miraculous.  The testament of his humble dedication to his community and his boat, to whom he said, "If we live or die, we'll be together," is profoundly touching.  (As he fled land he bade an apologetic farewell to all his other boats whom he could not save.)

It is amazing to think how many stories were lost in this tragedy.  I wondered this morning whether we might develop personal or device black boxes (the way all airplanes must carry one as a record of what happened to them).  I say this not to be morbid, as in the posthumous, forensic case of airplanes. 

When I was in Jordan last month, I irresponsibly hiked up a mountain late in the day.  When night fell faster than anticipated due to cloud cover, I was able to use the photographic history of my hike to piece together where I had been and the geology around me, and find my way back to the road.  But if my phone had been dropping geotagged pins as I hiked, that would not have been necessary, as I could have simply retraced my steps.
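A minimal sketch of that breadcrumb idea: drop timestamped pins while hiking, then play them back in reverse when lost.  The coordinates below are illustrative points in Jordan, not my actual track.

```python
from datetime import datetime, timezone

class BreadcrumbTrail:
    """Log timestamped (lat, lon) pins and play them back in reverse."""

    def __init__(self):
        self.pins = []

    def drop(self, lat, lon, when=None):
        """Record one geotagged pin; timestamp defaults to now (UTC)."""
        when = when or datetime.now(timezone.utc)
        self.pins.append((when, lat, lon))

    def retrace(self):
        """Waypoints from the most recent pin back to the trailhead."""
        return [(lat, lon) for _, lat, lon in reversed(self.pins)]

trail = BreadcrumbTrail()
trail.drop(30.3285, 35.4444)  # illustrative trailhead near Petra
trail.drop(30.3301, 35.4460)
trail.drop(30.3320, 35.4478)
route_back = trail.retrace()
```

On a phone, the `drop` calls would be driven by a background location service rather than by hand, with the battery cost of the lookup interval being the main design trade-off.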

App developers could take note of this use case, as it could easily be built using Skyhook or other background geo-lookup tools already present in most smartphones.  And perhaps in the future we'll all carry devices that actively communicate our last-known position and current well-being to servers, in case we are ever trapped in earthquake debris needing timely help.

Saturday, February 12, 2011

If someone tweets in rural Haiti, and they're using a feature phone, does anyone read it?

During the recent tsunami in Japan, the internet infrastructure on which Twitter and Facebook (including their mobile equivalents) run was taken offline in eastern Japan, near the epicenter of the earthquake.  However, social networks built on Japan's widely distributed feature phones continued to transmit messages over GPRS (General Packet Radio Service).  As a result, 80% of short-message communication during the disaster took place on the network maintained by Gree, a leading social network embedded on the feature phones distributed in Japan. *

This brought me to realize that other internet initiatives in markets dominated by feature phones could leverage similar approaches to get communities onto the web grid.  Most networks in the developing world are feature-phone dominated.  Though they may lack Japan's GPRS network, carriers can distribute bundled apps pre-loaded onto the widely distributed low-cost handsets.  Though these networks are mostly dominated by voice and SMS messaging, there is potential to use data hubs that synchronize with web-based servers to deliver compelling internet-based applications in these markets.  (For an example of this SMS-based concept, see Mobile-XL.)

Last year I had the opportunity to consider this problem with Random Hacks of Kindness, which hosted a hackathon around the United Nations Global Pulse initiative.  Our challenge was to consider how current internet technology could reach markets like the rural regions of post-quake Haiti for monitoring and dispatching disaster-relief initiatives.  The motive was to enable commercial tool sets run by for-profit businesses like Twitter and Facebook to be used in markets currently beyond their reach.  Naturally, it's easy for the most privileged in any society to use social communication tools to reach out for help.  But the voices that most need to be heard are often those without access to these tools.  If we find a way for communities with feature phones to get on the internet grid, by connecting SMS gateways to web servers that render these messages into internet protocol, the for-profit community can go the rest of the way in developing the algorithms needed to watch for trending signals that deserve attention from aid organizations. 

During the South by Southwest Interactive convention, Kate Schnepel of WildlifeSOS presented on how their organization uses a cumbersome workaround to just this problem.  Kartick Satyanarayan (pictured above with one of his rescued animals) is their main activist on the ground in India, often dispatched to parts of the country accessible only via voice and SMS.  He therefore sends updates from the field via SMS to someone with an internet connection, who in turn tweets the update in real time.  It's easy to see why those in rural areas would benefit from distributed SMS gateways that obviate the need for this to be a two-person task. 

Once we solve the hurdle of getting the signal to the web, which is purely technical, the matter of looking for signals from those in need of aid in the developing world can be addressed separately.  For example, the UN Global Pulse hopes for a platform that could pick up mentions of the word "cholera" in a place where it has not been heard before, allowing its local branches to address the problem swiftly before it becomes a regional crisis.  This could be a simple signal-amplification algorithm that analyzes the linguistic landscape of social and business chatter for statistically uncommon signals.  If you apply tracking just to new phrases that come onto the scene, normalize for internet memes and news topics, and then pay particular attention to phrases that spread the way diseases or word of disasters might, UN aid organizations and NGOs should be able to respond to a crisis in a way that prevents lasting damage to the community.
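As an illustrative sketch (not the UN Global Pulse implementation), such an algorithm could start by comparing recent word frequencies in a region's message traffic against a baseline and flagging terms that spike where they were never seen before:

```python
from collections import Counter

def new_signals(baseline_messages, recent_messages, min_count=3):
    """Flag words that spike in recent traffic but are absent
    from the baseline corpus for this region."""
    baseline = Counter(
        word for msg in baseline_messages for word in msg.lower().split()
    )
    recent = Counter(
        word for msg in recent_messages for word in msg.lower().split()
    )
    return sorted(
        word for word, count in recent.items()
        if count >= min_count and baseline[word] == 0
    )

# Hypothetical SMS traffic from one region.
baseline = ["market open today", "rain in the hills", "school starts monday"]
recent = [
    "cholera in the village",
    "two cholera cases reported",
    "need doctors cholera spreading",
]
alerts = new_signals(baseline, recent)
```

A production system would of course need the normalization for memes and news topics described above, plus language detection and geography, but the core "statistically uncommon signal" test really can be this simple.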

The Gree model of feature-phone social networking applications on widely distributed devices, or the Mobile-XL method of providing SMS gateways to the web, may be just what under-served markets need to bring the boon of social media platforms to all regions of the globe.  As the popular revolutions in the Middle East have shown, access to these tools is crucial for bringing attention and aid to areas of need.  Tunisia and Egypt had the benefit of these tools to amplify a signal that might otherwise have been inaudible to those outside their borders.  More people in the world can benefit from these amplification platforms.  The hurdle to bring it to them is not prohibitive.

*Presentation by Eiji Araki, VP of Product, Gree International speaking at the Japan Mobile Leaders Forum
http://schedule.sxsw.com/events/event_IAP8378