Sunday, January 5, 2025

Resurrecting the third dimension from the second

I remember my first time witnessing a hologram. In my case it was the Haunted Mansion ride at Disney World when I was six. As the slow-moving roller coaster progressed, we peered down on a vast ballroom beneath us with dozens of ghostly figures appearing to dance in front of the physical furniture that adorned the room.

Walt Disney World - Ghosts in the Haunted Ballroom

I asked my brother to explain how the illusion worked. He had a book on optical illusions that we would pore over, fascinated and bewildered. How does that still image appear to be moving, I wondered? I could see the two distinct parts of the illusion if I covered either of my eyes. But it appeared to come alive and move with depth when seen with both eyes open, as my brain assembled the parts into a synthesized whole. In elementary school, I went to a science museum to learn more about holographic film. Holograms are easy to create in static film. It's the perspective of having two eyes, each with its own extrapolated sense of spatial location, that creates the illusion of depth between two contrasting images, or two angles of a single image.

In Disney's example, a large wall of holographic film stays still while the audience moves past it. Each person's shifting perspective as the ride moves forward reveals a different view through the film. Though all the patrons are seated in a shared audience position, the image each one sees is a different stage of motion of the ghostly characters as they pass by the static holographic film. The motion of the roller coaster at each moment in time is what animates the scene. Disney had built an industry on moving film in front of people who sit still; in this illusion, the Disney team inverted the two. They animated the motion of the audience while the projection remained still. This form of holography is very expensive, but it makes sense at scale, like an amusement park ride. The other approach is to give people holographic lenses to wear as glasses in a theater. The illusion can be created at different gradients between those two extremes, with the film near the viewer or progressively further away. The real trick is bifurcating the views between the eyes in a way that the brain decides to merge them as a spatial volume. Recently, intermediate-distance screens for rendering stereoscopic views have started coming to the consumer market. With these, you don't have to wear glasses because a film on the surface of the display creates the split-image perspective. The 3D displays from Sony and Leia achieve their optical illusion on the surface of the computer screen, positioned within a yard of the viewer: they extrapolate where the viewer's head is positioned, then flash the two different images intermittently in different directions.

I am fascinated by content marketplaces and developer/publisher ecosystems for the internet. So my colleagues and I spend a lot of time talking about how the content creator and software developer side of the market will address content availability and discoverability now that stereoscopic head-mounted displays (VR and AR headsets) and 3D flat screens are becoming more mainstream. The opportunity to deliver 3D movies to an audience primed to enjoy them is now significant enough to merit developer effort to distribute software in this space. Home 3D movie purchase and viewing is currently constrained by the cost of requiring at once the 3D player + 3D screen + 3D movie. A simplified content delivery process over the web, or a combination of those three elements, would lower cost, broaden availability and expand the scale of the addressable audience.

Hollywood has been making stereoscopic content for cinematic distribution for decades and will continue to do so. Those budgets are very large, and some of them are trickling down into architectures that will make the overall cost drop for other content creators over time. As an example, Pixar's 3D scene format, Universal Scene Description (USD), is now open source and can be leveraged by holographic content creators on mobile and desktop computers today. USD support is now embedded in the iPhone's ARKit, such that 3D holographic streaming on everybody's computers is within reach once the content distribution network is in place.
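
As a taste of how approachable the format is, here is a minimal sketch using OpenUSD's Python bindings (the `pxr` module, installable as `usd-core` from PyPI). The file and prim names are just illustrative, not part of any particular pipeline.

```python
# Minimal OpenUSD sketch: author a tiny 3D scene and save it as a .usda file
# that USD-aware tools (including ARKit pipelines) can ingest.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("hello_hologram.usda")      # new USD stage on disk
xform = UsdGeom.Xform.Define(stage, "/hello")           # a transform prim
sphere = UsdGeom.Sphere.Define(stage, "/hello/world")   # a sphere beneath it
sphere.GetRadiusAttr().Set(2.0)                         # set the sphere's radius
stage.GetRootLayer().Save()                             # write the scene to disk
```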

What can we do to bring 3D depth back to formerly released media? I remember my father taking me to see a 3D screening of Creature from the Black Lagoon from the 1950s. Each frame of the movie was slightly offset, with a blue halo to one side and a red halo to the other. When we sat in the theater wearing "anaglyph 3D" glasses with red and blue lenses, our minds perceived depth from the width of the red and blue fringes around the actors and landscapes on the flat movie screen. Filming that movie required the actors to be shot stereoscopically with two cameras so that it could be presented in theaters that provided audiences with either anaglyph 3D or polarized-lens 3D glasses.

Today, however, creating two views of a scene that differ slightly, offset by roughly the interpupillary distance of our eyes, is a tactic achievable with radiance field photography. (Read more about radiance fields and the formation of novel views here.) Leveraging neural radiance fields allows a scene that is otherwise a still photograph to be wiggled side to side, generating a slight sense of depth. An alternative approach is to generate the interpupillary offset artificially, by isolating entities in the interpreted image using the graphics processing unit (GPU) of the local device. AI rendering capabilities could enable us to synthesize depth into older movies that were never shot stereoscopically.
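
To make that GPU-side idea concrete, here is a rough sketch of one way a single frame could be inflated into a stereo pair: estimate per-pixel depth with an off-the-shelf monocular depth model (MiDaS, fetched via torch.hub), then shift pixels horizontally in proportion to their nearness to fake a second eye. This is my own illustration, not the pipeline any studio or device vendor actually uses; the file names and the 24-pixel maximum shift are arbitrary stand-ins.

```python
# Sketch: synthesize a right-eye view from a single 2D frame using estimated depth.
import cv2
import numpy as np
import torch

# Load a small monocular depth model and its preprocessing transform via torch.hub.
model = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
model.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

img = cv2.cvtColor(cv2.imread("frame.jpg"), cv2.COLOR_BGR2RGB)
with torch.no_grad():
    pred = model(transform(img))                       # relative inverse depth, (1, H', W')
    pred = torch.nn.functional.interpolate(
        pred.unsqueeze(1), size=img.shape[:2],
        mode="bicubic", align_corners=False,
    ).squeeze().cpu().numpy()

# Normalize so 1.0 = nearest pixel, then convert nearness into a horizontal shift.
near = (pred - pred.min()) / (pred.max() - pred.min())
max_shift_px = 24                                      # stand-in for the interpupillary offset
h, w = near.shape
xs, ys = np.meshgrid(np.arange(w), np.arange(h))
map_x = (xs + near * max_shift_px).astype(np.float32)  # near pixels shift more than far ones
map_y = ys.astype(np.float32)
right_eye = cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)

cv2.imwrite("left.png", cv2.cvtColor(img, cv2.COLOR_RGB2BGR))
cv2.imwrite("right.png", cv2.cvtColor(right_eye, cv2.COLOR_RGB2BGR))
```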

Depth simulation in the gaming sector captivated my colleagues a few years back when we discussed using a game engine to re-render a legacy video game on a PC and synthesize an artificial depth perspective. (See Luke Ross's R.E.A.L. VR Mods to understand how this effect is achieved.) Modifying the PC's video output is a relatively simple trick of flickering different perspectives very quickly between each eye of a VR headset. The spatial layout of a game environment has already been coded by the game developer; the positions of the player and nearby objects are interpreted at play time as the player's movements are communicated, and the result is sent to the player's display as a flat image. The game works just fine if two cameras are inserted into the rendering instead of the single camera of the original design. This allows a whole trove of legacy 2D games to be experienced anew, in a different way than players first enjoyed them. Currently only a small subset of PC gamers opt for this intermediary-layer re-rendering of flat-screen games to enjoy them with depth in VR. The Flat2VR community has a Discord server dedicated to this kind of game modding, with some developers even offering gesture mapping to replace the original controller buttons. The magic of depth perception in modded PCVR games happens on the fly on the player's computer, with the game engine rendering two scenes to a VR headset's dual ocular screens. It requires an interpreting intermediary layer that the player has to install themselves.

In theory, the intermediary, interpretive layer that inflates the flat scene to be depth-rendered dynamically on the player's screen is not exceedingly complex. But it does require extra work from the GPU that encodes the two outputs to the player's screens, so a similar feat couldn't be done easily by a mobile device or an ordinary game console. However, a set-top box could, in theory, render other types of media beyond games with depth perspective on the fly, just as the game engine approach does for old games. The same alternating-eye method used in Flat2VR game modding could introduce an artificial sense of volume into the background of a movie, even without leveraging a 3D game engine.
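
Here is a conceptual sketch of the alternating-eye idea as I understand it (not Luke Ross's actual implementation): on every other frame, the rendering camera is nudged left or right of the original camera by half an interpupillary distance, so a headset receiving the stream sees a different eye's perspective each frame.

```python
# Conceptual sketch of alternating-eye rendering (illustrative only).
import numpy as np

IPD_METERS = 0.063  # assumed average interpupillary distance

def eye_view_matrix(view: np.ndarray, frame_index: int, ipd: float = IPD_METERS) -> np.ndarray:
    """Offset the game's 4x4 view matrix for whichever eye this frame belongs to."""
    # Even frames render the left eye, odd frames the right eye.
    eye_shift = (-0.5 if frame_index % 2 == 0 else 0.5) * ipd
    offset = np.eye(4)
    offset[0, 3] = -eye_shift   # moving the camera right shifts the scene left in view space
    return offset @ view

# Example: feed the game's own view matrix through the offset each frame.
game_view = np.eye(4)
for frame in range(4):
    print(frame, "x-translation:", eye_view_matrix(game_view, frame)[0, 3])
```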

Apple has introduced a new capability to infer spatial depth back into the 2D photographs we have taken in the past as individual users. I enjoy viewing my photographs afresh in my Vision Pro with an interpreted 3D depth perspective, even though the pictures were taken 20 years ago in 2D. These aren't holographic renderings, because the original depth of field wasn't captured in the pixels. The rendered image re-synthesizes an estimate of what the original scene looked like, with depth added based on distinctions between subject and background elements, rendered on the device by a machine learning model trained across thousands of other photographs. Apple didn't need to receive a copy of the picture (in this case of my brother standing in front of St. Mark's Cathedral in Venice) to know how far in front of the other people my brother was standing at the time of the photo. The position of his feet appearing in front of the flat ground behind him let the machine learning algorithm extrapolate his position spatially, by assuming the ground is flat and that he was closer to me than the other people who appear smaller in the flat image. These photos are generally more fascinating to stare at than the originals because they lure the eyes to focus on different levels in the background of the original photo instead of just resting immediately on the initial subject.

Depth inference by machine learning could achieve the same effect for older 2D movies from the last century, by the same means used to estimate depth in stills. That means a treasure trove of 3D movies could await at the other end of our streaming web services, if they could be re-rendered for broadcast by a depth-analysis layer applied to the stream, or prior to streaming, similar to the real-time Flat2VR depth-rendering display mods.

Whether our industry moves forward by machine learning re-rendering of back-catalog 2D movies for mass consumption on a central server (leveraging new distribution methods), or whether we take the set-top box path of doing depth-rendering dynamically for all content already delivered to the home, remains to be demonstrated and tested in the marketplace. In a market constrained by the adoption of HMDs and stereoscopic flat screens, the latter approach makes more sense. Yet the decreasing cost to manufacture these new screens may mean studio efforts to depth-render an entire back catalog of films make sense for new dedicated 3D streaming channels across a diaspora of stereoscopic display options. This mode of development and distribution would possibly parallel what is already happening in the Flat2VR gaming community. In just the two years I've followed the sector, I've seen several major game studios jump into re-releasing their own titles as dedicated VR games, sometimes with almost no embellishment beyond the original game's story other than depth, yet unlocking new audience access for the old titles just by doing so. However, the gaming market moves much faster than mainstream TV media, as gamers have already bought into the expensive GPUs and HMDs that make 3D rendering a minor additional expense. My estimate is that it will take only a decade to make 3D viewing of legacy 2D content an obvious and expected path for the consumer market. We may soon look at legacy 2D movies as a diminished experience, wanting our classic films embellished to show depth the way black-and-white movies are now being released as colorized versions on web streaming channels.

Having a movie studio re-render depth into all legacy content is expensive. But it makes sense at scale, rather than having every consumer buy a set-top converter to achieve the same end. It's a question of how many people want to benefit from the enhanced means of viewing, and who we want to bear the cost of the conversion. We have 2D movies now because they were the easiest and cheapest means of achieving mass media entertainment at scale at the time. Now, as we reach a broader distribution of 3D displays, we approach the moment when we can re-inflate 3D into our legacy creations to make them more similar to the world we inhabit.





Monday, December 23, 2024

Reflections on the evolution toward volumetric photography

During college I read Stephen Jay Gould's books on natural history in the animal kingdom with fascination. He writes extensively on the many different paths distinct species took to develop eyes through successive enhancements in different branches of the tree of life. Eye designs and their uses were selected over time for the survival benefits they conferred, outcompeting the less complex traits of their predecessors as biological competition increased. In science workshops as a youth, I designed pinhole cameras simulating the pupil of the eye and enjoyed taking apart old cameras to study how their shutters worked. When planes landed, I'd notice inverted images of the ground projected onto the ceiling of the plane's cabin, like a retinal image, through the pupil-like windows. I'd study and ponder General Relativity, the cosmic limit of light's speed, and its implications for the nature of the cosmos and its origins.

With this obsessive fascination with light and seeing, it makes sense that I've been an avid photographer for decades. I always look out for the newest tools to capture and save light reflections of the places I travel. Now that stereoscopic and 360 cameras are coming into the mainstream, my hobby as a light collector has had to shift accordingly. My 2D photos can be conveyed well in my photo books. However, these newer forms of photographic media need to be shared and enjoyed through different means of re-reflection, depending on the nature of the scene captured. Photospheres from 360 cameras are best experienced in spherical projections, like a planetarium, which can only be seen in a head-mounted display (HMD). Stereoscopic depth images can be viewed on both 2D and 3D flat screens, but are also better appreciated in HMDs. At the moment, the cameras that capture these new scenes outnumber the HMDs sold in the consumer market, so the audience for these 3D scene captures is likely limited to those who craft them and share them peer-to-peer.

This will shift as more companies popularize their own new form factors for head-worn displays and glasses, be they pass-through visors offering augmented views of the normal world (Google Glass, Snap Spectacles, Magic Leap, Xreal) or dual-purpose devices with both AR and VR modes (Pico, Quest, Vive, Varjo, Vuzix, Vision Pro, Moohan, etc.). With such a plurality of devices for people to choose from, it's clear there needs to be a common platform for photographers and motion capture enthusiasts to distribute their media. App-based content distribution is too limited to address the full breadth of device form factors in the market, though apps currently bridge us toward a future of unfettered browser platform support. Web-based utilities are clearly necessary to facilitate ubiquitous access across the panoply of devices. This is what makes me very excited about the cross-device web coding standard WebXR, which is now supported across all major web browsers.

WebXR hosting and streaming will be the easiest path for content creators and photographers to give access to their captures and artistry. Affordable and comfortable HMDs or flat-screen stereoscopic displays (made by Leia or Sony, for instance) are going to gradually scale up to support mass audiences who today can only experience this media on a borrowed or shared device. With Apple announcing official support for WebXR in Safari last year, and Google's recent announcement of renewed investment in 3D rendering with the Android XR operating system for HMDs, the ease of access to spatial depth pictures and 3D movies will soon be broadly available on affordable hardware. Many people may not particularly want to see photospheres of Italian architecture or watch action movies like Avatar streamed in 3D depth, because they are susceptible to motion sickness or vertigo. But for those who do, the cost of access will soon not be the limiting factor it has been over the past decade. Price points for HMDs and 3D flat screens will fall due to increased manufacturing scale, lower component costs and increased competition. Now that the hardware side of access is starting to become affordable, the content side is a new opportunity space that will grow in coming years for novice creators and photographers like me.

Last year marked my first leap into volumetric photography and videography. I'd read about it for years but, for lack of good public sharing platforms, hadn't taken the leap to sharing anything of my own. There are a few compelling applications in HMD app marketplaces for peer sharing of stereoscopic video streams and 360 and wide-angle photographs. But what I'm really excited about now is volumetric scanning of public scenes. This is yet another spin on photography that allows a feeling of presence in the space photographed, because the light field of the scene is captured from distributed perspectives, meaning different angles of reflection for every point of space inside the rendered volume. These scenes aren't captured by one shutter click at one moment in time, but from multiple perspectives over a span of time, distilled into a static scene that can be navigated after capture. In a volumetric capture you can walk through the space as if you were there in the original scene. Your camera display, or your avatar in an HMD, shows you how the point cloud looks from whatever position and perspective you navigate to within the still spatial image.

This kind of volumetric capture was formerly only accessible to professionals who used lidar (laser-ranging) cameras to generate archeological, geological or municipal landscape scans, mounted on moving cars, satellites and drones, or with large camera rigs. This approach was used to create Nokia Here Maps (now branded Here) and Keyhole (now branded Google Earth), as well as satellite maps of Earth's crust that reveal archeological and geological formations, such as this scan of the Yucatán Peninsula, which revealed stone structures underground belonging to a previously undiscovered Maya city.

Lidar scan of Maya city in Campeche region (courtesy BBC)

Years ago I had my first experience of a simulated archeological site when my brother-in-law gave me a tour of a reconstructed dig site in stereoscopic simulation. Thereafter, I sought out the founder of the Zamani Project, a team of archeologists who scan ancient sites in Africa and the Middle East to preserve their structures for remote study. Would it be possible to make these structures discoverable on the web, I wondered? I had explored the ancient cave city of Petra on foot and wandered the vast cities of Chichén Itzá and Uxmal in the Yucatán on my photographic expeditions. I wanted to go back to them virtually. But there isn't currently a means to do this; the online map versions of these amazing sculptural sites are rendered as flat photographs. Then I learned more about aerial photogrammetry. In contrast to the lidar scans of the Zamani Project, photogrammetry allows a rough topography to be captured and synthesized in a computer-aided design program such as RealityCapture. It's much less expensive than lidar scanning. Upon meeting drone photogrammetry experts, I tried to price out a project to put Chichén Itzá on the web map in a way that would let people at home experience these archeological sites remotely. Cost wasn't a particularly high barrier, I learned; even I could afford it as a civilian enthusiast. The challenging part was the authorization to pilot drones over an archeological site and subsequently republish the imagery, which must be obtained from the local government.

I'd first discovered photogrammetry when Microsoft announced its Photosynth service over a decade ago. The same way our eyes assemble our comprehension of 3D depth from the optical parallax of two blended perspectives, Photosynth would assemble photos from hundreds of perspectives to infer the geometry of large spatial objects. Though stitching photos into a 3D simulation can be somewhat expensive in terms of compute time, the increase in cell phone computing power means the processing of these images can now be offloaded to the image-capturing phone instead of more expensive centralized rendering on a server. Over the past couple of years I followed the emergence of neural radiance field research, abbreviated as NeRF, which has re-invigorated the effort to make 3D map views accessible to the public. (Read about the photo-stitching process demonstrated in the Block-NeRF research from UC Berkeley and the Alphabet company Waymo, which assembled millions of photos taken from self-driving cars to create a navigable map of a city formed only from stills taken by the onboard cameras. Note also that moving objects and people's ephemeral presence are abstracted out of the still images over time, as their transience is removed from the underlying point cloud of the city. Privacy protection is therefore a useful side benefit of scene capture over time as well.)

Last year I took online classes for photographers to learn how to do photogrammetry scans and self-publish them. These "holographic" images are called Gaussian Splats, the newest approach to photogrammetry for assembling sculptural depictions of 3D landscapes. The teacher of my course was Mark Jeffcock, a photogrammetry specialist based in the UK who captures amazing landscapes and architecture using apps that allow Gaussian Splat images to be exported and uploaded for web access. (See his amazing capture of the sculpture Knife Angel hosted on Arrival Space.)

Knife Angel by Mark Jeffcock
Though spatial photos and videos using parallel cameras are just getting started in the mainstream, I anticipate this new standard of volumetric capture is going to keep pace with the others as an engaging way for VR HMD users to share real-world settings with each other. The question is how to make these volume captures broadly accessible. Both Arrival Space and Niantic are jumping into the opportunity to offer peer-to-peer hosting of Gaussian Splats. (There is a hint that Varjo and Meta may eventually introduce peer-to-peer sharing of Gaussian Splats in the future, though probably only within an app-contained, logged-account context.)

If you are keen to explore some Gaussian Splats in your own HMD or 2D browser, I encourage you to visit Arrival Space to see some of the community scans shared by the 3D photographer community in the galleries there. Though I am just a beginner at volumetric captures, you can start off in my gallery to see how I tell stories with the scans I make. Creating Gaussian Splat scene renders takes a lot of time, as areas of the scene will appear blurry when the photographer hasn't dwelt long enough on a given portion of the scene. I still remember during my classes last year when Mark said, "This is a really good capture. But you forgot to point the camera at the ground." Because of this, our class was standing in a Gaussian Splat capture that had nothing to stand on. Becoming a true 3D photographer means we have to think like cinematographers, capturing how a scene draws the eye over time. Or we need to use a camera lens broad enough to capture photons from angles we typically ignore when we frame a photo. (Fish-eye splat capture is currently being researched by a company called Splatica, which takes full fish-eye movie footage of a scene to render the 3D still capture.) I anticipate that in the coming year, dozens of new cameras and new hosting platforms will emerge to address peer sharing of amateur photography. If this media captures your imagination, then this is a perfect time to become one of those photographers. If you have the ambition to try your own hand at creating Gaussian Splats, start off with Scaniverse, which allows easy export of the ply/spz 3D file types needed for uploading to your own personal galleries on Arrival Space. I encourage you to get out and explore aspects of your environment and culture that you can share with others across the internet, now that WebXR makes it possible for us to share spaces with those far away.
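
If you want to peek inside one of those exports before uploading it, here is a small sketch using the plyfile Python package. The file name is just a placeholder, and the property layout assumes the common 3D Gaussian Splatting PLY convention, which can vary between capture apps.

```python
# Sanity-check an exported Gaussian Splat .ply: list its per-point properties,
# count the points, and print the scene's bounding box.
import numpy as np
from plyfile import PlyData  # pip install plyfile

ply = PlyData.read("my_scan.ply")
verts = ply["vertex"]

print("properties:", [p.name for p in verts.properties])
print("point count:", verts.count)

xyz = np.stack([verts["x"], verts["y"], verts["z"]], axis=1)
print("bounding box min:", xyz.min(axis=0))
print("bounding box max:", xyz.max(axis=0))
```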

Note that if you are seeking to explore my travel scans in 3D spatial depth, you'll need to open that URL in a 3D viewer or an HMD browser. You'll have a default avatar to explore these spaces and talk with other people, which you can change with a login. Get started at https://arrival.space/ with your own personalized URL.

For in-depth reading on Gaussian Splat capture, see more from the New York Times R&D team.

Sunday, November 24, 2024

What does it want to say? (An approach for chasing bugs)

Long ago when I was studying French, I came across an idiomatic means of expressing a question about a word or thing. If you want to ask a French speaker to help you define a word's meaning, you can ask, "What does it want to say?" (Qu'est-ce que ça veut dire?) I liked the idea that the word is personified as a willful entity in this question's framing. The word has a desire or a will that needs to be considered. Over time, I came to think of technology problems this way. If a computer wasn't functioning properly, I'd frame it as: "What is the computer wanting to say or do? How am I seeing it try to do that? And what's the result?" If an app or computer process ever behaves strangely, walking through the steps of how it communicates will sometimes help you troubleshoot. By narrowing down the steps in its way of reasoning and acting, you can often isolate the problems and facilitate the process such that the device can achieve its goal, which is a proxy for your goal in using the device.

Say your phone or computer wants to find an internet connection. Is there one? Is it a LAN cable, Wi-Fi, EDGE, LTE, 3G, 5G? (Each differs slightly in how it connects and in the volume of data it can transmit, which can gate your device from accessing or sending some information.) If your device "wants" to use that internet connection to reach a website and refresh its content, what happens, or doesn't happen, when that request to the server returns a response? Gradually, from the keyboard to the screen to the operating system to the network to the connected service provider, you can observe each step in this flow. When I worked in Tokyo providing search engine services to internet portals, my clients would sometimes call me on my cell phone asking me to diagnose an issue. I could send a tracer ("isotope") query or run a traceroute through the network to test their servers' response and connection to my company's servers. It's like the game of Pooh sticks from the Winnie the Pooh stories. If you drop a stick on one side of a bridge, you can watch it come out from under the other side and race your friends, or bears, to see whose stick emerges fastest. This is the same process as device-level troubleshooting: who is saying what, and how?

I recently had to troubleshoot a perplexing connectivity issue. My father so enjoyed the process of going through this with me that he asked me to write about it; he could find no web documentation of the issue as he was experiencing it. My father worked as a systems engineer for many years at IBM and wrote programs in dozens of languages, from early mainframe computers to modern-day Macs. He knows pretty much every trick a Mac can do and has followed computer web forums for years to expand his understanding of how Apple's operating systems shifted from pre-System 10 (a.k.a. OS X), through the PowerPC and Intel chip phases, to modern "Apple Silicon" chips. So his reaching out to me about a technical bug is a rare thing; it was usually the other way around. But I was able to narrow down the observable symptoms to several potential root causes until finally identifying the bug as a network issue rather than an operating system or hardware issue. In case this issue affects your home computer or network, or if you're just curious about the steps involved, here is how we sorted it out.

My father's computer had a peculiar symptom that he'd never seen before in 30+ years of working on Macs, and that I'd also never seen. From the inside perspective, the symptoms were that he couldn't access bank websites on his new Mac devices, couldn't do a phone-home system re-install from Apple's servers, and couldn't contact Apple support from the machine's integrated communication system. The symptom didn't happen on his wife's older operating system on an Intel-chip Mac, so he concluded that either his Apple Silicon Mac or its recently upgraded operating system was potentially the source of the issue. From the outside perspective, his FaceTime (VoIP calling) availability disappeared for me when I tried to reach him; Apple's servers were telling my devices that they couldn't find him on the network. So I suspected something was wrong with his ID, his account, or potentially a compromised device. Because I'd read in the press about phishing tactics and how to avoid them, I started by triaging whether there was a potential malware issue with his machine. That didn't seem to be the case. Everything else on his Mac worked, except for apps that required web resources to be fetched securely by an internal web-dependent function. He checked with his banks to ensure there was no suspicious activity in his accounts and no recent attempts to reroute or reset his bank logins. We established a secondary channel of communication and confirmed that his account phone numbers had not been redirected. Once we sorted out that he wasn't subject to any immediate risk, we took to testing and ruling out other potential issues.

Source: Wiki Commons
Cutting to the chase scene: ultimately, after ruling out as many factors as we could, we isolated the issue as an IPv6 problem. Over a decade earlier I had attended a lecture on how the web industry was transitioning to a new process for issuing IP addresses, planning for the far broader range of computers expected to come online over the following decades. The IPv4 process for issuing IP addresses, by which devices identify themselves as unique entities across the web, was heading toward a scaling problem akin to the Y2K issue at the turn of the 21st century. (More information about that elsewhere; it's analogous but not directly related, as it had to do with date formats used in computers, not the device namespace of the web.) The newer IPv6 process for self-identifying devices over a network uses a much wider range of values than IPv4 addresses, meaning a lower risk of any two devices being confused with each other and creating network conflicts from simultaneous inbound connection requests. This lecture was deep in my memory, but it was triggered by a comment someone made about the IPv6 transition resulting in higher network security in the future. It was relevant here because my father's computer was communicating in a high-trust context, with banks and Apple networks, in a way that wasn't being accepted by the parties on the other end because of how his devices were contacting them. The banks were not responding because they didn't accept the incoming request as valid for the high-trust context. Why would the other computers work and the newer computers not? Could there be a difference in how Intel-chip Macs and their operating systems convey TLS (Transport Layer Security) traffic over the web? Sure enough, that appeared to be the issue. The banks and the Apple network were accepting the IPv4 traffic from the older Mac. But the newer Macs and their respective operating systems were transmitting IPv6 values which weren't getting through the network to establish the trust necessary to proceed. Once we configured his network to route the IPv6 traffic generated by the OS, his computer, browsers and applications started functioning flawlessly again. You can read much more in depth about IPv4 and IPv6 elsewhere, but suffice it to say that there was nothing wrong with his Mac. It was the attempts to communicate secure, device-unique values over the network that were failing.
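
If you want to run a similar check yourself, here is a small sketch of the kind of test that helped us: ask the resolver for IPv6 and IPv4 addresses of a high-trust host and attempt a TCP connection over each. The hostname is just an example; any site you trust will do.

```python
# Compare IPv6 vs. IPv4 reachability for a given host and port.
import socket

HOST, PORT = "www.apple.com", 443  # example host; substitute any site you trust

for family, label in ((socket.AF_INET6, "IPv6"), (socket.AF_INET, "IPv4")):
    try:
        # Resolve an address of the requested family, then attempt a TCP connection.
        infos = socket.getaddrinfo(HOST, PORT, family, socket.SOCK_STREAM)
        addr = infos[0][4]
        with socket.socket(family, socket.SOCK_STREAM) as sock:
            sock.settimeout(5)
            sock.connect(addr)
        print(f"{label}: connected to {addr[0]}")
    except OSError as exc:
        print(f"{label}: failed ({exc})")
```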

I hope you don't run into a network routing problem like his over these winter holidays. Bank customer service, Apple customer service and even your internet provider may not be familiar with your home computing or network setup. They may also have difficulty understanding what issues you face based on how you describe the problem. But tracking down how the symptoms present themselves will help you communicate with them, or with the relatives, friends or technical support services who can help you resolve whatever challenges you face.

In the computing era we delegate our willful processes to these device "agents" that act over the web on our behalf. Just like humans, they can get tripped up on the way to saying things, or the channel through which to express them. Like studying a foreign language, we can examine the terms our agents use to help them communicate for us more effectively. When their speech breaks down, we have only to examine the vocabulary and steps they use to get across their "meaning" and thereby return them to functioning eloquently on our behalf.

(Info Link) For more on IPv6 see: https://en.wikipedia.org/wiki/IPv6

(Non-paid promotion of French) Learn more French from my favorite French podcast by Louis: https://podcasts.apple.com/us/podcast/learn-french-with-daily-podcasts/id191303933

(Non-paid shout out to CES) Special thanks to the Consumer Electronics Show for offering the lecture on IPv4 vs IPv6 that set us on the right track in this particular case.

Further details:

For those who want to follow the troubleshooting steps we used, here are the important clues, the conclusions, and the route we took to isolating the problem behavior:

Key issues:

  • The first important clues were the non-working FaceTime (VoIP) service and his device's inability to connect to Apple servers. This showed it was a two-way problem across multiple applications, while non-sensitive web traffic was unhindered.
  • Testing the IP address configuration was the main key to resolving it. My computer registered an IPv6 address when querying whatismyipaddress.com from outside his home. His computer registered an IPv4 address but not an IPv6 identity for web traffic. 
  • Then, when I tested my own computer in his home environment, it experienced the same issue as his. (I used a more recent beta version of macOS than he did.) Replicating the bug with a different machine on a different version of the OS showed conclusively that the network was the gating source of the problem.

Questions and steps in our exploration that narrowed down the answer:

  • Had his IP addresses been flagged as a phishing or malware source, leading to banks blocking traffic? (Confirmed not.)
  • Had his phone number been re-routed recently in a way that hinted a shift in the trust relationship the bank had with his account?
  • Operating system issue? Could we try a fresh install of a base operating system? (Not possible in his case, because Apple Silicon Macs don't allow booting from an external machine in terminal mode the way Intel-chip Macs did. Neither of his Macs could revert to his wife's OS because of incompatibility between the OS builds for recent Intel chips and the newer Apple Silicon devices.)
  • Browser issue? He was accessing banks via several browsers, and all failed, while regular non-logged-in sites functioned fine across all browsers. (This means a browser-dependent issue wasn't causing the problem. But because secure sites were failing to load, including Apple's, it made me suspect Transport Layer Security over TCP/IP was somehow the problem.)
  • Cable internet access restrictions? Because some cable internet providers offer parental controls, I suspected Comcast might have inadvertently rolled out a traffic-throttling limit for some accounts, or for all customers in a region. Did any of his friends who used this provider complain of loss of access?
  • Once we suspected networking was the problem, we reset the DHCP settings of his computer, to no avail.
  • Finally we bypassed his router, which was the main gating device, and thereby resolved the issue. We decided to keep using his router for non-sensitive traffic around the house, but not for sensitive or secure traffic from the newer devices in the home.

Sunday, October 15, 2023

Using VR comfortainment to bring an end to the US blood supply shortage

I completed my MBA during a fascinating time in our world economy. We'd endured a pandemic that shut down significant portions of the economy for nearly a year, followed by surging interest rates as the government response to the pandemic resulted in significant inflation and subsequent layoffs in my region. While this was a dramatic time for the world, it was a fascinating time to return to academia and evaluate the impact of natural and artificial stimuli on the global economy.

For our master's thesis we were asked to identify an opportunity in the economy that could be addressed by a new business entrant. In discussions with several members of my MBA cohort, we decided to focus on the blood supply shortage that followed the end of the pandemic. Why would the US go into a blood crisis at the end of the pandemic, we wondered? Shouldn't that have been expected during the peak of the pandemic in 2020 or 2021? It turns out that during the pandemic, surgeries and car crashes dropped at the same time that blood intake to the supply dropped. It was only after the pandemic ended that supply and demand got out of sync. In 2022 people started going to hospitals again (and getting injured at normal rates) while the blood donor pool had significantly shrunk and not recovered its pre-pandemic rate of participation. So hospitals were running out of blood. What's more concerning is that the drop in donor participation doesn't look like a short-term aberration. Something needs to shift in the post-pandemic world to return the US to a stable blood supply. This was a fascinating subject for study.

As we began our studies we interviewed staff at blood banks and combed through the press to understand what was taking place. There were several key factors in the drop-off of donors. Long Covid had affected roughly 6% of the US population, potentially reducing willingness to donate among individuals who'd participated before. (Even though blood banks accept donations from donors who have recovered from Covid, the feeling that one's health is not at full capacity affects the sentiment one has about passing on blood to another.) At the same time there was a gradual attrition of the baby boomer generation from the donor pool, while younger donors were not replacing them due to generational cultural differences. Finally, the hybrid-work model companies adopted post-pandemic meant that bloodmobile drives at companies, schools and large organizations could no longer draw the turnout they had before.

The donation pool we’ve relied on for decades requires several things. So we tried to identify those aspects that were in the control of the blood banks directly:

  • First, an all-volunteer, unpaid donor pool requires a large number of people in the US (~7 million) willing to help out of their own internal motivation and with ample time to do so. Changing people's attitudes toward volunteerism and blood donation is hard, and the marketing efforts to achieve it are expensive. In an era when more people are having to work multiple jobs, the flexibility to volunteer extra time is becoming constrained. Time scarcity among would-be donors is likely to keep worsening in contrast to pre-pandemic times.
  • Second, there needs to be elasticity in the eligible donor pool to substitute for ill would-be donors in times of peak demand. Fortunately, this year the FDA has started expanding eligibility criteria in reaction to the blood crisis, permitting people who were previously restricted from donating to participate now. However, this policy matter is outside the control of blood banks themselves. Blood demand is seasonal, peaking in winter and summer, but donor behavior is fairly constant, and donors are difficult to entice when need spikes, due to their own seasonal illnesses or summer travel plans.
  • Third, and somewhat within the control of blood banks, is in-clinic engagement and behavior. Phlebotomists can try to persuade donors to upgrade their donation time during admission and pre-screening. This window of time, when an existing donor is sitting in clinic, is the best time to promote persistent return behavior. Improving how this is done is the best immediate lever for bolstering the donor pool toward a resilient blood supply. But should we saddle our phlebotomists with the task of marketing and up-selling donor engagement?

Considering that there is no near-term solution to the population problem of the donor pool, we need to do something to bolster and expand the engagement of the remaining donors we have. In our studies we came across several interesting references. "If only one more percent of all Americans would give blood, blood shortages would disappear for the foreseeable future." (Source: Community Blood Center) This seems small. But currently approximately 6.8 million Americans donate blood, less than 3% of the population, so it's easy to see how a few million more donors would assuage the problem. Yet the education and marketing needed to achieve this would be incredibly expensive, slow and arduous; it's hard to change that many minds in a short time frame. A second comment from the same source gave us an avenue to progress with optimism: "If all blood donors gave three times a year, blood shortages would be a rare event. The current average is about two." We agreed that this seemed like a much more achievable marketing strategy. In our team calls, Roy Tomizawa commented that we need to find something that makes people want to be in the clinic environment beyond their existing personal motivations for helping others. He suggested the concept of "comfortainment" as a strategy, whereby people could combine their interest in movie or TV content with the time they'd sit still in the clinic for blood donation, dialysis or other medical care. If we were to transform the clinic from its bright, fluorescent-lit environment into a calm, relaxing space, more people might wish to spend more time there.
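
As a back-of-envelope check on those two quotes, here is a quick sketch; the US population figure (~332 million) is my own assumption, while the donor count and per-donor averages come from the sources above.

```python
# Rough arithmetic behind the donor-pool quotes.
us_population = 332_000_000        # assumed figure for illustration
current_donors = 6_800_000         # approximate current US blood donors
avg_donations = 2                  # current average donations per donor per year
target_donations = 3               # the "three times a year" target

print(f"Current donor share: {current_donors / us_population:.1%}")
print(f"One more percent of Americans: {0.01 * us_population:,.0f} additional donors")
extra = current_donors * (target_donations - avg_donations)
print(f"Raising the average from {avg_donations} to {target_donations} donations/year "
      f"adds roughly {extra:,.0f} donations annually")
```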

As a life-long donor, I've heard a lot of in-clinic promotions to increase the frequency of donation. But during intake so many things are happening: 1) FDA screening questions, 2) temperature check, 3) blood pressure measurement, 4) hemoglobin/iron test, 5) verbal confirmation of no smoking or vaping. This battery of activity is an awkward time for phlebotomists to insert promotional campaigns about increasing engagement. One day I noticed some donors were doing something different in the blood bank and I asked about it. That's when I learned how the apheresis process differs from whole blood donation. It involves a centrifuge device that collects more of a specific blood component at the time of draw from a single donor, then returns the rest of the blood to the donor. Not only does this yield multiple individual units of blood product per draw, the recovery time between donations is shorter. Whole blood donations require two months for the donor to replenish their blood naturally before donating whole blood again. Apheresis donors lose less blood overall and can therefore return more often. The only downside is that it requires more of the donor's time in clinic.

Because apheresis was the most flexible variable that blood banks could adjust as demand and supply waxed and waned, our study zeroed in on optimizing this particular lever of supply to address the blood shortage. In a single apheresis draw, a donor can provide three units of platelets, far more than a whole blood draw yields. This allows the blood bank to supply three units to hospitals immediately after the draw, instead of having to centrifuge and pool post-donation units of whole blood from multiple donors. Platelets are especially needed for certain hospital patients, such as cancer patients or those with blood clotting disorders. As for other blood components, an apheresis draw can provide twice the red blood cells of a whole blood donation. And while providing platelets, a donor may also provide plasma in the same draw, which carries natural antibodies from healthy donors that can help patients with weakened immune systems.

Hearing all this, you might think everybody should be donating via apheresis. But the problem is the extra time needed, an additional hour of donor time at least. A donor planning on just a 15-minute blood draw may be reluctant to remain in apheresis for one to two hours, even if it triples or quadruples the benefit of their donation. Though this is one factor that can be adjusted immediately based on local hospital demand, asking donors to make the trade-off for the increased benefit can be a hard sell.

When I first tried apheresis, I didn't enjoy it very much. But that's because I don't like lying down and staring at fluorescent lights for long periods of time. Lying on the gurney for 15 minutes is easy and bearable. Having phlebotomists try to persuade hundreds of people to change their donations to something much more inconvenient is a difficult challenge. Some blood banks offer post-donation coupons for movies or discounts on food and shopping to promote apheresis donations. My team wondered if we could bring the movies into the clinic, the way airlines introduced movies to assuage the hours of impatience people feel sitting on flights. Having people earn two hours of cinema time after donation by sitting still for two hours in clinic raises the question of why you couldn't combine the two. Donors could watch IMAX-style films at the clinic when they planned to be immobile anyway!

We interviewed other companies that had launched VR content businesses to help people manage stress or chronic pain, or to discover places they may want to travel to while they're at home. We then scoped what it would take to create a device and media distribution company for blood banks, using VR movies and puzzle games to entice donors to come to the clinic more often and for longer stays. Introducing VR to apheresis draws doesn't create more work for phlebotomist staff. In fact, one phlebotomist can manage several apheresis donations at once because the process provides an hour of idle time between needle placement and removal. So while we increase yield per donor, we also reduce the busywork of the phlebotomy team, introducing new cost efficiencies into overall clinic processing time.

Consumer-grade VR headsets have now decreased in price to the level that it would be easy to give every donor an IMAX-like experience of a movie or TV show for every two-hour donation. To test the potential for our proposed service, we conducted two surveys. We started with a survey of existing donors to see if they would be more inclined to attend a clinic that offered VR as an option. (We were cautious not to introduce an element that would make people visit the clinic less.) We found that most existing donors wouldn't be more compelled to donate just because of the VR offering; they already have their own convictions to donate. Yet one quarter of respondents claimed they'd be more inclined to donate at a clinic where the option existed than at a clinic that did not offer VR. The second survey was for people who hadn't donated yet. There we heard significant interest in the VR enticement, especially among a younger audience.

Fortunately, we were able to identify several existing potential collaborators who could make our media strategy easy for blood clinics to implement. Specifically, we needed a way to address sanitation of devices between uses, for which we demoed the ultraviolet disinfection chambers manufactured by Cleanbox Technologies. If donors were to wear a head-mounted display, clinics would need to make sure that any device introduced into a clinical setting had been cleaned between uses. Cleanbox is able to meet the 99.99% device sterilization standard required for use in hospitals, making them the best solution for a blood clinic introducing VR into its comfortainment strategy.

Second, in order for the headsets to receive regular updates and telemetry software checks, we talked to ArborXR, whose platform would allow a fleet of deployed headsets to be updated overnight through a secure update. This would also take device maintenance concerns away from the medical staff onsite. Devices being sterilized, charged and updated overnight while not in use could facilitate a simple deployment alongside the apheresis machines already supplied to hospitals and blood banks through medical device distributors, or as a subsequent add-on.

Using the Viture AR glasses at an apheresis donation

While we hope that our study persuades some blood banks to introduce comfortainment strategies to reward their donors for their time spent in clinic, I’ve firmly convinced myself that this is the way to go. I now donate multiple times a year because I have something enjoyable to partake in while I’m sharing my health with others.

I’d like to thank my collaborators on this project, Roy Tomizawa, Chris Ceresini, Abigail Sporer, Venu Vadlamudi and Daniel Sapkaroski for their insights and work to explore this investment case and business model together. If you are interested in hearing about options for implementing VR comfortainment or VR education projects in your clinic or hospital, please let us know.

 

For our service promotion video, we created the following pitch, which focuses on the benefits the media services approach brings to blood clinics, dialysis clinics and chemotherapy infusion services.






Special thanks to the following companies for their contribution to our research:

Quantic School of Business & Technology 

Vitalant Blood Centers

Tripp VR

Cleanbox Technologies

Viva Vita

Abbott Labs 

International VR & Healthcare Association

VR/AR Association

Augmented World Expo

Sunday, March 19, 2023

The evolution of VR spaces and experiences

Six years ago, Oculus (now part of Meta) launched the first consumer version of its VR headset, the Oculus Rift CV1. I had my first experience of that new media interface at San Francisco's Game Developers Conference (GDC). Oculus technicians escorted me into a sound-proof dark room and outfitted me with the headset attached to an overhead boom that kept the wires out of my way as I moved freely through simulated environments crafted in Epic's Unreal Engine world-building architecture. (This is the same developer environment that was used to create The Mandalorian TV series.) The memory of that demonstration is strong to this day because it was such a new paradigm of media experience. As I moved through a simulated world, the parallax of distant objects shifted differently from that of near objects. Everything appeared a bit like a cartoon, more colorful than the real world. But the sense of my presence in that world was incredibly compelling and otherwise realistic.

Yesterday I went to a physical VR gym in Richmond, California with a dozen other people to try a simulated journey in which we would physically walk through a virtual replica of the International Space Station. It was profound to reflect on how much the technology has advanced in the six years since my first simulated solitary spacewalk at GDC. The hosts of the event walked us through a gradual orientation narrative, as if we were astronauts ascending an Apollo-era launch tower, before we were set free to roam the purely virtual ISS, along with brief pre-filmed video greetings from real astronauts at the exact locations marked by green dots on the station map. When we approached the astronauts, glowing orbs showed the camera positions from which they had been filmed on the ISS. By standing right where the astronauts were during filming, we could see all the equipment and experience what it was like for them to live on the ISS.

In a recent interview with Wall Street Journal reporters, Philip Rosedale (the founder of Linden Lab) commented, "The appeal of VR is limited to those people who are comfortable putting on a blindfold and going into a space where other people may be present." Here I was, actually doing that in a crowd of people I had never met before. All I could see of those people was a ghostly image of their bodies and hand positions, with a gold, blue or green heart beacon indicating their role as fellow VR astronauts, family members in a group, or the event staff keeping an eye out for anyone having hardware or disorientation issues in the VR environment. Aside from an overheating-headset warning and a couple of moments when the spatial positioning lost sync with the walls of the spaceship, I didn't have any particular issues. It was very compelling!

Six years ago at GDC, I remember a clever retort a developer shared with me at the unveiling of the Rift CV1. While waiting in line at the demo booth, I asked what he thought about the nascent VR technology. He said, "Oh, I think it will be like the Xbox Kinect. At first, nobody will have one and everyone will want one. Then, later, everyone will have one and nobody will want one!" Now, years later, we can look back in retrospect to see what happened. VR hasn't yet reached very broad market penetration because of the rather high price of hardware. But when the pandemic shuttered the outside world to us temporarily, many of us took to virtual workrooms to meet, socialize and work, and Meta was well positioned for this. Zoom conference calls felt like flashbacks to the Brady Bunch/Hollywood Squares tic-tac-toe grid of faces. Zoom felt oddly isolating in contrast to sharing spaces with people physically, and peering into people's homes also seemed a little disturbing. Several engineers and product managers I frequently meet with suggested we switch to VR instead. One of them challenged me to give a lecture in VR. So I researched how Oxford University was doing VR lectures in EngageVR and conducted my own lecture on the history of haptic consumer technology in an EngageVR lecture room. It was challenging at the time to juggle lecture slide navigation while simultaneously controlling my spatial experience of appearing as a lecturer in the classroom. But I succeeded in navigating the rough edges of the early platform's limitations. (EngageVR has drastically improved since then, introducing customizable galleries and broader support for imported media assets.)

While the experience felt rough at first, I found it much more compelling than using shared slides and grid camera views of the Zoom conference call format. So my colleagues collaborated with me to create a bespoke conference room where we could import dozens of lecture resources, videos, pdfs and 3D images. In this conference room a large group could assemble and converse in a more human-like way than staring into a computer camera. While we gave up the laptop camera with its tag-team game of microphone hand-off, we took up using VR visors where we could see everybody at once, oriented around us in a circle. Participants could mill around the room and study different exhibits from previous discussions while others of us were engrossed in the topic of the day.

I know that people like us are rather atypical because we adopt technology long before the mainstream consumer. But the interesting thing is that years later, even with pandemic isolation waning, we all still prefer to convene in our virtual conference spaces! It typically comes down to two choices of where we meet. If it's a large group, we assemble in the lecture hall hosted on Spatial's web servers. These are fast-paced and scintillating group debates where we coordinate speakers by hand-waving or by following the auditory cues of interjecting voices. If it's four people or fewer, we use EngageVR or vTime, which allow for a more intimate discussion. Those platforms have us use virtual avatars that, unlike Spatial's, don't resemble our physical bodies or faces. But the conversational handoff is easy, because the natural auditory cues of each speaker are simple to hear.

"Why does this simulated space feel more personal than the locked-gaze experience of a Zoom call?" I wondered. My thought is that people speak differently when they are being stared at (by camera or otherwise) than when they have a free-moving gaze and a sense of personal space. Long ago I heard an interview with the NPR radio host Terry Gross. She said that she never interviewed her guests on camera, as she preferred to listen closely only to their voice. Could this be the reason the virtual conference room feels more personal than the video conference?

During my years studying psychology, I recalled the idea of Neuro-Linguistic Programming (NLP), in which author Richard Bandler lectured that the motion of the eyes allows us to access and express different emotions tied to how we remember ideas and pictorial memories. In NLP's therapeutic uses, a therapist can understand traumatic memories discussed in the process of therapy based on how people express themselves with their eyes and bodies during memory recollection. Does freedom from camera-gaze perhaps permit greater psychological freedom in the VR context?

In lectures and essays by early VR pioneers, I kept hearing references to people who identified as neurodivergent preferring the virtualized environment. In my early study of autism spectrum disorder, I had read that one theory of ASD is that it involves an over-reaction to sensory stimuli. People who have ASD often avoid eye contact due to the intensity of social interaction. In casual contexts this behavior can be interpreted as an expression of disinterest or dislike. Perhaps virtualized presence in VR can address this issue of overstimulation, allowing participants to have a pared-down environmental context. In an intentionally-fabricated space, everything there is present by design.

I still don't think the trough of the hype cycle is upon us for VR (considering my developer friend's theory about the land of VR disenchantment). First, VR is still too expensive for most people to experience a robust setup. The "Infinite" ISS exhibit costs considerably more than watching an IMAX movie, its nearest rival medium. Yet soon Samsung, ByteDance, Pimax and Xiaomi are coming to market with new VR headsets that will drive down the cost of access and give most of the general public a chance to try it. I'm curious to see when we will get to that point of "everybody having it and nobody wanting it." I still find myself preferring these new-media social interactions because they approximate proximity and real human behavior better than Zoom, even if they still have a layer of obvious artificiality.

A funny thing is that I have a particular proclivity for visiting space in my VR social sessions. After my GDC experience years ago I downloaded the BBC's Home VR app, which simulates a semi-passive perspective of an astronaut conducting a space walk. This allowed me to relive my GDC experience, with a surprise twist involving space debris. Then I tried the Mission ISS walk-through VR app, which gives users a simulated experience of floating around inside a realistic-looking ISS assembled from NASA photographs of the station. Then, when Meta announced its new Horizon Venues platform, I was able to go into a virtual IMAX-style theater, with a gigantic half-dome screen rendered in front of hundreds of real-time avatars of people from around the world, to watch 360 videos taken from the ISS and produced for redistribution by Felix & Paul Studios. And finally this week I was able to visit the Phi Studios physical walk-through. What I like about this progression is that the experience became more and more social, getting further away from the feeling of the movie Gravity, of being isolated in space.

Yet, for the most social experience of all, my friends and I like to go to an artificial simulated space station hovering 250 miles above Earth, where we can sit and have idle conversations as a realistic-looking model of Earth spins beneath us. This is powered by a social app called vTime. When I go there with my colleagues, we inevitably end up talking about the countries we're orbiting over and relating experiences that are outside of our day-to-day lives. Perhaps it takes that sense of being so far removed from the humdrum daily environment to let the mind wander to topics spanning the globe, outside the narrow confines of our daily concerns. In one such conversation, my friend Olivier and I got into a long discussion about the history and culture of Mauritius, his home country, over which we were then flying. vTime's Space Station location only has four chairs for attendees to sit in at a time, so we use it for small group discussions only. If you ever get inspired to try VR with your friends, I recommend this venue for your team discussions. It's hard to say what is so compelling about this experience in contrast to gazing at people's eyes in a video conference. But even after the pandemic lockdown subsided and we could once again meet in person, I still find myself drawn back to this simulated environment. I believe that when every one of us has access to this, we will come to prefer it for remote meetings in lieu of the past decades' 2D panel plus camera.




Sunday, October 2, 2022

Coding computers with sign language

I am one of those people who searches slightly outside the parameters of the near-term actual with an eye toward the long-term feasible, for the purpose of innovation and curiosity. I'm not a futurist, but a probable-ist, looking for the ways we can leverage the technologies and tools we have at our fingertips today to reach the adjacent potential opportunities those tools make possible. There are millions of people at any time thinking about how to apply a specific technology in novel ways and push its capabilities toward exciting new utilities. We often invent the same things using different techniques, the way that eyes and wings evolved via separate paths in nature, a process called convergent evolution. I remember going to a Google developer event in 2010 and hearing the company announce a product that described my company's initiative down to every granular detail. At the time I wondered if someone in my company had jumped the fence. But I then realized that our problems and challenges are common. It's only the approaches to address them and the resources we have that are unique.

When I embarked on app development during the launch of the iPhone, I knew we were in a massive paradigm shift. I became captivated with the potential of using camera interfaces as inputs to control the actions of computers. We use web cameras to send messages person to person over the web. But we could also communicate commands directly to the computer itself if we add an interpretive layer that translates what the camera sees into instructions.

This fascination with the potential future started when I was working with the first release of the iPad. My developer friends were toying around with what we could do to extend the utility of the new device beyond the bundled apps. At the time, I used a Bluetooth keyboard to type, as speech APIs were crude and not yet interfacing well with the new device and because the in-screen keyboard was difficult to use. One pesky thing I realized was that there was no mouse to communicate with the device. Apple permitted keyboards to pair, but they didn't support the pairing of a Bluetooth mouse. Every time I had to place the cursor, I had to touch the iPad, and it would flop over unless I took it in my hands. 

I wanted to use it as an abstracted interface, and didn't like the idea that the screen I was meant to read through would get fingerprints on it unless I bought a pen to touch the screen with. I was acting in an old-school way, wanting to port my past computer interaction model to a new device, while Apple wanted the iPad to be a tactile device at the time, seeking to shift user expectations. I wanted my device to adapt to me rather than having me adapt to it. "Why can't I just gesture to the camera instead of touching the screen?" I wondered.

People say necessity is the mother of invention. I often think that impatience has sired as many inventions as necessity. In 2010 I started going to developer events to scope out use cases of real-time camera input. This kind of thing is now referred to as "augmented reality," where the computer overlays some aspect of our interaction with the world outside the computer itself. At one of these events, I met an inspirational computer vision engineer named Nicola Rohrseitz. I told him of my thought that we should have a touchless-mouse input for devices that had a camera. He was thinking along the same lines. His wife played stringed instruments. Viola and cello players have trouble turning pages of sheet music or touching the screen of an iPad because their hands are full as they play! So gesturing with a foot or a wave was easier. A gesture could be captured by tracking the shifts in pixel color that motion produces across the camera sensor. He was able to track those shifts locally on the device and render them as input to an action on the iPad. He wasn't tracking the hand or foot directly; he was analyzing the images after they were written into random access memory (RAM). By doing this on device, without sending the camera data to a web server, you avoid any kind of privacy risk from a remote connection. With the iPad interpreting what it was seeing as a command, it could thereafter turn the page of sheet music on his wife's iPad. He built an app to achieve this for his wife's use. And she was happy. But it had much broader implications for other actions.
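To make the idea concrete, here is a minimal sketch of frame differencing, the general technique that kind of on-device gesture detection builds on. This is my own reconstruction in Swift, not Nicola's actual code; the thresholds and the page-turn hook are invented for illustration.

    import Foundation

    /// A toy frame-differencing detector: compare two grayscale frames and
    /// report whether enough pixels changed to count as a deliberate gesture.
    /// (Illustrative only; real buffers would come from the camera pipeline.)
    struct MotionDetector {
        let pixelThreshold: UInt8   // how much a single pixel must change
        let triggerFraction: Double // what fraction of pixels must change

        func didDetectMotion(previous: [UInt8], current: [UInt8]) -> Bool {
            guard previous.count == current.count, !previous.isEmpty else { return false }
            var changed = 0
            for i in 0..<previous.count {
                // Absolute luminance difference at this pixel location.
                let diff = previous[i] > current[i] ? previous[i] - current[i]
                                                    : current[i] - previous[i]
                if diff > pixelThreshold { changed += 1 }
            }
            return Double(changed) / Double(previous.count) > triggerFraction
        }
    }

    // Hypothetical usage: frames arrive as grayscale byte buffers.
    let detector = MotionDetector(pixelThreshold: 40, triggerFraction: 0.15)
    // if detector.didDetectMotion(previous: lastFrame, current: thisFrame) { turnPage() }

A real implementation would restrict the comparison to a region of interest and debounce across several frames, but the core idea is just this delta between successive images, computed entirely on the device.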

What else could be done with signals "interpreted" from the camera beyond hand waves, we wondered? Sign language was the obvious one. We realized at the time that the challenge was too complex, because sign language isn't static shape capture. Though the ASL alphabet consists of static hand shapes, most linguistic concept signs involve a hand position shifting in a certain direction over a period of time. We couldn't have achieved this without first achieving figure/ground isolation, and the iPad camera at that time had no means of depth perception. Now, a decade later, Apple has introduced HEIC image capture (a more advanced image compression format than JPEG) along with LiDAR depth information that can be saved as layers of the image, much like the multiple filter layers in a Photoshop file.
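As an aside, figure/ground isolation is now something a developer can request directly from the OS. A hedged sketch using the Vision framework's person segmentation request (available on recent Apple OS versions; I'm assuming a CGImage input here):

    import Vision
    import CoreVideo
    import CoreGraphics

    // Ask Vision for a person segmentation mask: a pixel buffer where
    // foreground ("figure") pixels have high values and background pixels
    // are near zero. The caller can use the mask to cut the subject out.
    func personMask(for image: CGImage) throws -> CVPixelBuffer? {
        let request = VNGeneratePersonSegmentationRequest()
        request.qualityLevel = .balanced // .fast / .balanced / .accurate

        let handler = VNImageRequestHandler(cgImage: image, options: [:])
        try handler.perform([request])

        // The first observation's pixelBuffer is the segmentation mask.
        return request.results?.first?.pixelBuffer
    }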

Because we didn't have figure/ground isolation, Nicola created a generic gesture motion detection utility, which we applied to let users play video games on a paired device with hand motions and tilting rather than pushing buttons on a screen. We decided it would be fun to adapt the tools for distribution with game developers. Alas, we were too early with this particular initiative. I pitched the concept to one of the game studios in the San Francisco Bay Area. While they said the gameplay concept looked fun, they said politely that there would have to be a lot more mobile gamers before there would be demand among those gamers to play in a further augmented way with gesture capture. The iPad had only recently come out. There just wasn't any significant market for our companion app yet.

Early attempts to infer machine models of human or vehicle motion would overlay an assumed shape of a body onto a perceived entity in the camera's view. In video taken from a driving car, the system might infer that every moving object in the field of view is another car. (So being a pedestrian or cyclist near early self-driving cars became risky, as the seeing entity's assumptions about objects and behavior predicted motions different from what pedestrians and cyclists actually exhibited.) In a conference demo on an expo floor, it is likely that most of what the camera sees are people, not cars. So the algorithm can be set to infer body position, represented by assumed skeletal overlays of legs related to bodies and presumed eyes atop bodies. The program pictured below was meant to be used in shop windows to notice when someone was captivated by the items displayed in the window. For humans nearby, eyes and arm positions were accurately projected; for humans far away, less so. (The Consumer Electronics Show demo did not capture any photographs of the people moving in front of the camera. I captured that separately with my own camera.)

Over the ensuing years, other exciting advancements brought the capture of hand gestures to the mainstream. With the emergence of VR developer platforms, the need for alternate input methods became even more critical than in the early tablet days. With the conventional technique of wearing head-mounted displays (HMDs) and glasses, it became quite obvious that conventional input methods like keyboard and mouse were going to be too cumbersome to render in the display view. So rather than trying to simulate a mouse and keyboard in the display, a team of developers at LeapMotion took the approach of using an infrared camera that could detect hand position, then infer the knuckle and joint positions of the hands. Those inferences could in turn be passed as input to any operating system, so it could figure out what the hands were signaling for the OS to do at the same time as they were projected into the head-mounted display. (Example gesture captures could be mapped to commands for grabbing objects, gesturing for menu options, etc.)

The views above are my hands detected by infrared from a camera sitting below the computer screen in front of me, then passed into the OS view on the screen, or into a VR HMD. The joint and knuckle positions are inferences based on a model inside the OS-hosted software. The disadvantage of LeapMotion was that it required an infrared camera to be set up, along with some additional interfacing through the OS to the program leveraging the input. But the good news was that OS and hardware developers noticed and could pick up where LeapMotion left off, bringing these app-specific benefits to all users of next-generation devices. Another five years of progress, and the application of the same technology in Quest replaces the x-ray style view of the former approach with something you can almost accept as the realistic presence of your own hands.

 
HoloLens and Quest thereafter merged the former external hardware camera into the HMD itself, facing forward. This could then send gesture commands from the camera inputs to all native applications on the device, obviating the need for app developers to toil with the interpretive layer of joint detection inside their own programs. On the Quest platform, app developer adoption of those inputs is slow at present. But for those apps that do support it, you can use the "Hands API" to navigate main menu options and high-level app selection. A few apps like Spatial.io (pictured above) take the input method of the Hands API and use the inferred hand position to replace the role formerly filled by hardware controllers for Spatial content and locomotion actions. Because Spatial is a hosted virtual world platform, the Hands API gives the user a way to navigate within the 3D space through more direct hand signals. This lets the user operate in the environment with their hands in a way resembling digital semaphore. Like Spider-Man's web-casting wrist gesture, a certain motion will teleport the user to a different coordinate in the virtually-depicted 3D space. Pinching fingers brings up command menus. Hovering over an option and letting go of the pinch selects the desired input command. The entire menu of the Spatial app can be navigated with hand signals, much like the futuristic computer interfaces of the Spielberg film Minority Report. It takes a bit of confused experimentation before the user's neuro-plasticity rewires the understanding of the new input method. (The same way learning the abstract motions of a mouse cursor or game-pad controls requires a short acclimatization period.)
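Whatever the platform-specific API looks like, the pinch recognition itself reduces to simple geometry on the inferred joints. Here is a generic sketch in Swift; the names, thresholds and units are my own assumptions, not Meta's Hands API.

    import simd

    // Generic pinch detection over inferred hand joints, independent of any
    // particular SDK: if the thumb tip and index tip get close enough, treat
    // it as a "pinch down"; when they separate again, treat it as the release
    // that confirms a menu selection.
    struct PinchDetector {
        var isPinching = false
        let pinchDistance: Float = 0.02   // ~2 cm in tracking space (assumed units)

        mutating func update(thumbTip: SIMD3<Float>, indexTip: SIMD3<Float>) -> String? {
            let distance = simd_distance(thumbTip, indexTip)
            if !isPinching && distance < pinchDistance {
                isPinching = true
                return "pinch began"      // e.g. open the command menu
            }
            if isPinching && distance > pinchDistance * 1.5 {
                isPinching = false
                return "pinch released"   // e.g. select the hovered option
            }
            return nil
        }
    }

The hysteresis (releasing only at 1.5 times the pinch-down distance) is a common trick to keep the gesture from flickering on and off right at the threshold.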
 
This is a great advancement for the minority of people reported to be putting HMDs on their heads to use their computers. But what about the rest of us who don't want visors on our noggins? For those users, too, we can anticipate computer input from our motions in front of the machine of our choice very soon. Already, the user-facing camera in iOS devices detects the full facial structure of the user. The depth vision of that camera enables mirroring of the shape of our facial features, so it can be used much the way an old skeleton key precisely matched the internal workings of a bolt lock. Matching the precise shape of your face, plus detecting the pupils of your eyes looking at the screen, is a trustworthy indication that you are awake and presently expecting your phone to awaken as well. Pointing my camera at a photo of me doesn't unlock the phone, nor would someone pointing my phone at me while I'm not looking at it. As a fun demonstration of this capability, new emoji packs called "memoji" allow you to enliven a cartoon image of your selection with CGI animation that mirrors your facial gestures. Cinematographers have previously used body tracking to enable such animation for films including Lord of the Rings and Planet of the Apes. Now everybody can do the same thing with position-mirroring models hosted in their phones.
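For the curious, here is roughly how that mirroring is exposed to developers on Apple's side: ARKit's face tracking publishes per-frame "blend shape" coefficients that an avatar rig can replay. A minimal sketch, with the avatar hookup left out.

    import ARKit

    // Read ARKit's face-tracking blend shapes, the coefficients a
    // memoji-style avatar mirrors. Each value runs from 0 (neutral) to 1
    // (fully expressed); an avatar rig simply replays them on its own mesh.
    final class FaceMirror: NSObject, ARSessionDelegate {
        let session = ARSession()

        func start() {
            guard ARFaceTrackingConfiguration.isSupported else { return }
            session.delegate = self
            session.run(ARFaceTrackingConfiguration())
        }

        func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
            for case let faceAnchor as ARFaceAnchor in anchors {
                let jawOpen = faceAnchor.blendShapes[.jawOpen]?.floatValue ?? 0
                let blinkLeft = faceAnchor.blendShapes[.eyeBlinkLeft]?.floatValue ?? 0
                // A real avatar would feed these into its animation rig;
                // here we just note them.
                print("jawOpen: \(jawOpen), blinkLeft: \(blinkLeft)")
            }
        }
    }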

The next great leap of utility for cross-computer communication, as well as computer programming, will be enabling the understanding of other human communication beyond what our faces and mouths express. People on video conferences use body language and gesture through the digital pipelines of our web cameras. Might gestural interactions be brought to all computers, allowing conveyance of intent and meaning to the OS as command inputs?

At a recent worldwide developer convention, Apple engineers demonstrated a concept of using machine pattern recognition to provide gestural input commands to the operating system, extending the approach of the infrared camera technique. Apple's approach uses a set of training images stored locally on the device to infer input meaning. The method of barcode and symbol recognition with the Vision API pairs a camera-matched input with a reference database. The matching database can of course be a web query to a large existing external database. But for a relatively small batch of linguistic pattern symbols, such as American Sign Language, a collection of reference gestures can be hosted within the device's memory and paired with the inferred meaning the user intends to convey, for immediate local interpretation without a call to an external web server. (This is beneficial for security and privacy reasons.)

In Apple's demonstration below, Geppy Parziale uses the embedded computer vision capability of the operating system to isolate the motion of two hands separately from the face and body. In this example he tracked the gesture of his right hand separately from the left hand making the gesture for "2." Now that mobile phones have figure/ground isolation and the ability to segment portions of the input image, enormously complex gestural sign language semiotics can be achieved in the ways that Nicola and I envisioned a decade prior. The rudiments of interpretation via camera input can now represent the shift of meaning over time that forms the semiotics of complex human gestural expression.
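Here is a hedged sketch of what that two-hand tracking looks like through the Vision framework; the crude "does this look like a 2?" check at the end is my own simplification, standing in for the trained classifier Apple demonstrated.

    import Vision
    import CoreGraphics

    // Detect up to two hands and inspect fingertip positions, roughly in the
    // spirit of the WWDC hand-pose demo. The "two fingers extended" check is
    // a toy heuristic, not Apple's gesture model.
    func detectHands(in image: CGImage) throws {
        let request = VNDetectHumanHandPoseRequest()
        request.maximumHandCount = 2

        let handler = VNImageRequestHandler(cgImage: image, options: [:])
        try handler.perform([request])

        for hand in request.results ?? [] {
            let wrist = try hand.recognizedPoint(.wrist)
            let index = try hand.recognizedPoint(.indexTip)
            let middle = try hand.recognizedPoint(.middleTip)
            let ring = try hand.recognizedPoint(.ringTip)

            // Normalized image coordinates; y grows upward in Vision's space.
            let indexUp = index.confidence > 0.3 && index.location.y > wrist.location.y
            let middleUp = middle.confidence > 0.3 && middle.location.y > wrist.location.y
            let ringDown = ring.confidence < 0.3 || ring.location.y <= wrist.location.y

            if indexUp && middleUp && ringDown {
                print("This hand might be signing a 2")
            }
        }
    }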

 

I remember in high school going to my public library and plugging myself into a computer, via a QWERTY keyboard, to try to learn the language that computers expect us to comprehend. But with these fascinating new transitions in our technology, future generations may be able to "speak human" and "gesture human" to computers instead of having us spend years of our lives adapting to them! 

My gratitude, kudos and hats off to all the diligent engineers and investors who are contributing to this new capability in our technical platforms.

 

Friday, September 9, 2022

Looking it up with computer vision

My mother introduced me to a wide range of topics when I was growing up. She had fascinations with botany, ornithology, entomology and paleontology, among the so-called hard sciences. As a teacher, she had adopted certain behaviors she'd learned from studying child development and psychology in her master's degree program about the best way to help a young mind learn without just teaching at it. One of her greatest mantras from my childhood was "Let's look it up!" Naturally she probably already knew the Latin name for the plant, animal or rock I was asking about. But rather than just telling me, which would make me come to her again next time, she taught me to always be seeking the answers to questions on my own.

This habit of always-be-looking-things-up proved a valuable skill when it came to learning languages beyond Latin terms. I would seek out new mysteries and complex problems everywhere I went. When I traveled through lands with complex written scripts different from English, I was fascinated to learn the etymologies of words and the ways that languages were shaped. Chinese/Japanese script became a particularly deep well that has rewarded me with years of fascinating study. Chinese pictographs are images that represent objects and narrative themes in shape rather than in sound, much like the gestures of sign language. I'd read that pictographic languages are considered right-brain dominant because understanding them depends on pattern recognition rather than the decryption of alphabetic syllables and names, which is typically processed in the left brain. I had long been fascinated by psychology, so I thought that learning a right-brain language would give me an interesting new avenue to conceive of language differently and potentially thereby think in new ways. It didn't ultimately change me that much. But it did give me a fascinating depth of perspective into new cultures.

Japanese study became easier by degrees. The more characters I recognized, the faster the network of comprehensible compound words accelerated. The complexity of learning Japanese as a non-native had to do with representing language by brush strokes instead of phonemes. To look up a word you don't know how to pronounce, you must look up a particular shape within the broader character, called a radical. You then look through a list of potential matches, sorted by total brush stroke count, that contain that specific radical. It takes a while to get used to. While living in Japan, I'd started with the paper dictionary look-up process, which is like using a slide rule to zero in on the character, which can then be researched elsewhere. Computer manufacturers have invented calculator-like dictionaries that sped up the process of searching by radical. Still, it typically took me 40-60 seconds with a kanji computer to identify a random character I'd seen for the first time. That's not so convenient when you're walking around outside in Tokyo. So I got in the habit of photographing characters for future reference, for when I had time for the somewhat tedious process.

Last month I was reviewing some vocabulary on my phone when I noticed that Apple had introduced optical character recognition (OCR) into the operating system of new iPhones. OCR is a process that has been around for years on large desktop computers with expensive supplemental software. But having it at my fingertips made the lookup of kanji characters very swift. I could read any text through a camera capture and copy it into my favorite kanji dictionaries (jisho.org or the imiwa app). From there I could explore compound words using those characters and their potential translations. Phones have been able to read barcodes for a decade. Why hadn't that been applied to Chinese characters until now? Just like barcodes, they are a specific image block that has a direct reference to a specific meaning. My guess is that recognizing barcodes had a financial convenience behind it. Deciphering words for poly-linguists was an afterthought that was finally worth supporting. This is now my favorite feature of my phone!
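Under the hood, this is the same text recognition that developers can call directly through the Vision framework. A minimal sketch of the kanji-capture step (the language identifiers are my assumption about which ones apply):

    import Vision
    import CoreGraphics

    // Recognize text in a photographed sign so the characters can be pasted
    // into a kanji dictionary. Recognition runs entirely on the device.
    func recognizeText(in image: CGImage) throws -> [String] {
        let request = VNRecognizeTextRequest()
        request.recognitionLevel = .accurate
        request.recognitionLanguages = ["ja-JP", "zh-Hans"] // assumed identifiers

        let handler = VNImageRequestHandler(cgImage: image, options: [:])
        try handler.perform([request])

        // Take the top candidate string from each detected text region.
        return (request.results ?? []).compactMap { $0.topCandidates(1).first?.string }
    }

    // Hypothetical usage: feed the result into jisho.org or the imiwa app.
    // let lines = try recognizeText(in: photoOfSign)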

What's more, the same Vision API allows you to select any text from any language, and even objects in pictures, and send it to search engines for further assistance. For instance, if you remember taking a picture of a tree recently but don't know what folder or album you put it in, Spotlight search can query across the photo library on your phone even if you never tagged the photo with a label for "tree." Below you can see how the device-based OCR indexing looked for the occurrence of the word "tree" and picked up the image of the General Sherman Tree exhibit sign in my photo collection from a trip to Sequoia National Park. You can see how many different parts of the sign the Vision API detected containing the word "tree" in a static image.

But then I noticed that even if I put the word "leaf" into my Spotlight search, my Photos app would pull up images that had the shape of a leaf in them, often on trees or nearby flowers I had photographed. The automatic semantic identification takes place inside the Photos application with a machine learning process, which then has a hook to show relevant potential matches to the phone's search index. This works much like the face identification feature in the camera, which allows the phone to isolate and focus on faces in the viewfinder when taking a picture. Several layers of technology achieve this. The first is identifying figure/ground relationships in the photo, which is usually done at the time the photo is taken, with the adjustable focus option selected by the user. (Automated focus hovers over the viewfinder when you're selecting the area of the photo to pinpoint as the subject or depth of focus.) Once the subject can be isolated from the background, a machine learning algorithm can run on a batch of photos to find inferred patterns, like whose face matches which person in your photo library.
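Developers can tap the same kind of on-device labeling through Vision's image classification request. A rough sketch of how a "leaf"-style label might surface (the confidence cutoff is arbitrary, and the label vocabulary is Apple's, not mine):

    import Vision
    import CoreGraphics

    // Ask Vision for its taxonomy of labels for a photo, the sort of
    // on-device inference that lets a search for "leaf" surface untagged images.
    func labels(for image: CGImage, minimumConfidence: Float = 0.5) throws -> [String] {
        let request = VNClassifyImageRequest()
        let handler = VNImageRequestHandler(cgImage: image, options: [:])
        try handler.perform([request])

        return (request.results ?? [])
            .filter { $0.confidence >= minimumConfidence }
            .map { $0.identifier }   // e.g. "tree", "leaf", "outdoor" (labels vary)
    }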

From this you can imagine how powerful a semantic-discovery tool would be if you had such a camera in your eyeglasses, helping you to read signs in the world around you, whether in a foreign language or even your own native language. It makes me think of Morgan Freeman's character "Easy Reader," who'd go around New York looking for signs to read in the popular children's show The Electric Company. The search engines of yester-decade looked for semantic connections between words written and hyperlinked on blogs. This utility we draw on every day uses machine-derived indications of significance, based on the way people write web pages about subjects and on which terms those authors link to which subject webpages. The underlying architecture of web search is all based on human action. A secondary layer of interpretation is then based on how often people click on results that address their query well. Algorithms are used to make the inferences of relevancy. But it's human authorship of the underlying webpages, and human preference for those links thereafter, that informs the machine learning. Consider that all of web search is based on just what people decide to publish to the web. Then think about all that is not published to the web at present, such as much of the offline world around us. You can imagine the semantic connections that can be drawn through the interconnectedness of the tangible world we move through every day. Assistive devices that see the code we humans use to thread together our spatially navigable society will reveal a web of inter-relations easily mapped by the optical web crawlers we employ over the next decade.

To test how the Vision API deals with ambiguity, you can throw a picture of any flower of varying shape or size at it. The image will be compared against potential matches inferred from a database of millions of flowers in the image archives of Wikimedia Commons, the free-use files that appear on Wikipedia. This is accessed via the "Siri Knowledge" engine at the bottom of the screen on your phone when you look at an image (see the small star shape next to the "i" below). While Wikimedia Commons is a public database of free-use images, the approach could easily be expanded to any corpus of information in the future. For instance, there could be a semantic optical search engine that only matches against images in the Encyclopedia Britannica. Or, if you'd just bought a book on classic cars, the optical search engine could fuzzy-match input data from your future augmented-reality lenses against only the cars you see in the real world that match the model type you're interested in.
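One way a developer could approximate that restricted-corpus matching today is with Vision's image "feature prints," which reduce each image to a vector that can be compared by distance. A hedged sketch, where the corpus of classic-car reference images is entirely hypothetical:

    import Vision
    import CoreGraphics

    // Compare a captured image against a small private corpus of reference
    // images (e.g. the classic cars from a book) using Vision feature prints.
    func featurePrint(for image: CGImage) throws -> VNFeaturePrintObservation? {
        let request = VNGenerateImageFeaturePrintRequest()
        let handler = VNImageRequestHandler(cgImage: image, options: [:])
        try handler.perform([request])
        return request.results?.first
    }

    func closestMatch(of capture: CGImage, in corpus: [String: CGImage]) throws -> String? {
        guard let target = try featurePrint(for: capture) else { return nil }
        var best: (name: String, distance: Float)?
        for (name, reference) in corpus {
            guard let refPrint = try featurePrint(for: reference) else { continue }
            var distance: Float = 0
            try target.computeDistance(&distance, to: refPrint)   // smaller is more similar
            if best == nil || distance < best!.distance {
                best = (name, distance)
            }
        }
        return best?.name
    }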


Our world is full of meanings that we interpret from it or layer onto it. The future semantic web of the spatial world won't be limited to only what is on Wikipedia. The utility of our future internet will be as boundless as our collective, collaborative minds are. If we think of the great things Wikimedia has given us, including the birth of utilities like the brains of Siri and Alexa, you can understand that our machines face only the limits that humans themselves impose on the extensibility of their architectures.