One of the things I like best about the annual MEMS and Sensors Executive Congress (MSEC17), hosted by SEMI-MSIG, is being in the presence of some of the greatest minds in the industry. While the presentations are always top-notch, it’s the stimulating dinner conversations they inspire that I look forward to most. This year, I decided to leave the straightforward reporting to my journalist colleagues and instead give you, the reader, a seat at the table. So pull up a chair and settle in. I think you’re going to like this.
The Salad Course
A light salad of field greens with pistachios and vinaigrette was waiting for us as we took our seats. Before the discussion turned technical, we chatted about the event itself, and there was a lot to talk about: Would the culture of this tight-knit group change now that it was part of the larger SEMI engine? So far, so good, but it remains to be seen. It was tragic that the venue had to change because of the California wildfires, but SEMI did an excellent job of shifting gears so quickly (hats off to Agnes Cobar and her team!). The Hayes Mansion was a lovely venue; the seating arrangement was a little tight, but the food was fantastic! It’s good that we are returning to Napa next year. Was there a different vibe without the presence of Karen Lightman? She always added a certain energy to the event that will be hard to duplicate. (She’s still working in smart cities as Executive Director of Metro21, Carnegie Mellon University’s smart city initiative.) Maybe we can have her back as a speaker at a future event? — Stuff like that.
The Soup
As we tucked into the butternut squash soup, talk turned to the morning’s keynote by Lama Nachman, who heads up Intel’s anticipatory computing lab, which is tasked with collecting real-world data to better understand the environments in which MEMS and sensing technologies need to work. She talked about the importance of contextual awareness and of understanding the ubiquity of sensing, and she called on the group to create sensors that are more configurable, so they can be intentionally used to measure phenomena beyond what they were initially designed for — for example, using accelerometers as microphones by mounting them in the nosepiece of eyeglasses, where they act as vibration sensors. That data can then be used to diagnose Parkinson’s disease or to aid speech therapy.
Nachman also talked about her pet project — working with Professor Stephen Hawking to develop an assistive context-aware toolkit so he can more intuitively control his computer to communicate with the world. One of the key elements was to develop an infrared sensor that captured the movement of his cheek, which he could then use as a universal button to perform different tasks. This toolkit is now open source so others with similar disabilities can live fuller lives.
The Main Course
The surf and turf, consisting of petit filet mignon and lobster tail, was cooked to perfection. As dinner conversations often go, the mention of Professor Hawking segued into a discussion of how the technology drivers for the MEMS and sensors industry — things like artificial intelligence, machine learning, augmented and virtual reality, robots, and autonomous vehicles — could bring about unintended consequences, for example, triggering the onset of the singularity sooner than expected. The singularity is the point at which machines surpass humans in intelligence, and it is a major concern to people like Elon Musk and Professor Hawking. One of my dining companions referenced a fascinating talk Professor Hawking gave last year, in which he predicted the singularity would be reached by 2030 – a mere 14 years away. At the point that machines take control, Dr. Hawking noted, the rest of us are reduced to merely being “wildlife.” I wonder why we, as humans, would ever intentionally take technology to the point where computers are smarter than we are. (“To see if we can,” one person told me.) I would argue that if this is the case, we have already reached the singularity: we are already slaves to our smartphones and ruled by technology in many aspects of our lives.
Dessert
The chocolate mousse cake was the ideal finishing touch, and the conversation got lighter, turning to one of my pet topics: autonomous vehicles, and how people feel about letting go of the wheel and joining the passenger economy. One dinner companion predicted that, on the Gartner Hype Cycle, we have reached the peak of inflated expectations for fully autonomous vehicles, and that we are likely to see a long 10 years in the trough of disillusionment as the technology’s kinks and societal issues are worked out: things like winter weather’s impact on the sensors, as well as curvy roads, gaps in connectivity, and a vehicle’s ability to distinguish between a baby carriage, a wheelchair, and a shopping cart.
What is becoming increasingly clear is that we cannot try to fit autonomous vehicles into today’s societal mores. The world in which driverless cars play a pivotal role will be distinctly different from the one in which we live today. In his keynote, NXP’s Lars Reger said it best: do we continue to think of a car as a horse carriage that by accident has a combustion engine, or do we think of it as a big IT system that by accident has wheels? He said the challenge is not only to master the brain of a car, but to capture the environment around it, and that’s where sensors are critical. For Reger, the personal motivation to pursue this world is the number of lives that will be saved (1.3M) when human error is removed. For many, this reason alone is enough to do whatever it takes to transition society into embracing the passenger economy. In this world, for example, questions of auto insurance liability go away, as there is no need for car insurance.
Coffee Talk
This dinner conversation spilled over into the next day during the breaks and lunch, and inspired an hour-long chat with TDK/InvenSense’s Peter Hartwell, who gave the final tech talk, similar to the one he delivered at the European MEMS and Sensors Summit, this one titled Why Robots Made Me Love Virtual Reality. Again, I brought up the topic of unintended consequences. Many are good — like realizing that the activity data captured by Jawbone fitness trackers while people sleep can be used to detect the impact of earthquakes and where to send first responders, or how autonomous vehicles, robots, and telemedicine will enable the elderly to lead more independent lives. Still, Hartwell worries whether conveniences like automatic online shopping from our fridges, virtual reality, and voice-activated assistants will create a society of sedentary beings.
And then there are the humorous consequences, like the Alexa anecdote Matt Crowley of Vesper shared during his talk on voice-activated assistants. As the story goes, a six-year-old girl asked Alexa to play dollhouse with her, and so Alexa ordered her an expensive dollhouse from Amazon. A local news station got wind of the story and shared it on an evening newscast, repeating that the girl asked Alexa for a dollhouse, thereby triggering more orders from Alexa devices that were inadvertently activated near televisions tuned to this newscast. It seemed far-fetched, even to me, so I did a quick search, and here’s what Snopes says about it. Regardless, the notion that voice-activated assistants can be a gateway into the home is cause for concern.
Lastly, I’m concerned about the socio-economic implications of our high-tech world. Are we creating an elitist society? What about the people who live in remote areas without the adequate means to purchase these cool devices? Are we really making their lives easier?
Hartwell is very passionate about his work. While he shares many of my concerns, he says the benefits of technology always outweigh the negatives, so rather than dwell on them, he focuses his efforts on creating positive experiences for his customers. For him, the technology is just a means to an end. His aha! moment came during a simple evening out, when he observed three generations of a family sharing videos on an iPhone. He likes that he helped make that happen.
I asked him my singularity question: why do we want machines that are smarter than humans? For him, the quest is noble and multifaceted: so that we can improve our health and wellness; so we can gather the data needed to impact climate change; so that we can make the world a better place. How do we prevent the bad things from happening? Hartwell offered several ways. First, we follow Isaac Asimov’s Three Laws of Robotics:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Second, Hartwell says we must take our time. The IoT is great, but we should be careful, because if we mess this up, we could be set back years. We rushed into the nuclear age without considering the consequences — we unleashed the demon without thinking about it. Nuclear is the only truly clean energy, but we let it get away from us, and we are still paying the price.
Parting Thoughts
While MSEC17 showcased many advances in sensing technologies and devices for applications we hadn’t even thought of yet, to me, these are the really important conversations we need to have as we move down this path together toward a smarter, more connected society. I was relieved to see that although the people in this room are instrumental in developing these technologies, they are as concerned about the unintended consequences as I am. It’s because of forums like MSEC that we can move forward confidently, knowing these things are being discussed and considered, and that we are in good hands. ~ FvT