Just noticed Gartner’s Magic Quadrant for Indoor Location Services, 2019, and one thing glared at me, again: the vacancy in the LEADERS quadrant. It was like this in 2018 too!
By 2022, 65% of enterprises will require indoor location asset tracking (of both people and equipment) to be part of all access-layer infrastructure communication decisions (up from less than 10% today), and yet there is no LEADER. Or, shall I say, there is a clear OPPORTUNITY.
The real challenge is that no one has yet definitively cracked the code on making adoption easy. Google, Apple, Microsoft … they all see the indoor space as the next BIG frontier, but no one can make the BLUE DOT easy.
The struggle starts the moment you begin looking to install the RTLS infrastructure. Whether the use case is wayfinding, proximity services, personalized experiences, or asset tracking, maximizing the value of a solution requires a pervasive deployment.
How will you power all the RTLS receivers or transmitters?
How will you install cables everywhere?
If you decide to use battery-based RTLS receivers/transmitters, then you will end up doing significant battery management. Periodically changing batteries by going from receiver to receiver (or transmitter to transmitter) in person, finding and replacing a malfunctioning unit, changing parameters, applying security updates: nothing is easy. With batteries, you also end up establishing processes for recharging, storing, and disposing of the batteries you use.
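The scale of that burden is easy to underestimate. Here is a back-of-the-envelope sketch; the fleet size and battery life are illustrative assumptions, not data from any real deployment:

```python
def yearly_battery_swaps(num_nodes, battery_life_months):
    """Average number of in-person battery swaps per year for a
    fleet of battery-powered RTLS receivers/transmitters."""
    return num_nodes * 12 / battery_life_months

# Hypothetical campus: 1,000 beacons with an 18-month battery life
swaps = yearly_battery_swaps(1000, 18)  # roughly 667 swaps per year
```

Even with a generous battery life, that is hundreds of site visits a year before counting failures, parameter changes, or security updates.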
Many Wi-Fi companies added BLE (Bluetooth Low Energy) support to their Wi-Fi access points so that the access points can act as iBeacon transmitters as well as BLE receivers, eliminating the need for additional batteries or installations. This is a step in the right direction indeed; however, the location accuracy is “limited.” Furthermore, these solutions require additional access points from the same vendor or need to be augmented by battery-based solutions. Recent AI-based solutions such as https://www.themarysue.com/wifi-positioning-system-mit/ suffer from accuracy limitations too.
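To see why BLE-based accuracy is “limited,” consider how distance is typically inferred from received signal strength: the log-distance path-loss model. The sketch below is generic; the calibrated 1 m power and the path-loss exponent are illustrative assumptions, and real indoor exponents vary with walls and people. Note how each 6 dB of signal variation roughly doubles the distance estimate.

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-59, path_loss_exponent=2.0):
    """Log-distance path-loss model: estimated distance in meters to
    a BLE beacon. tx_power_dbm is the calibrated RSSI at 1 m; the
    exponent is ~2 in free space, higher indoors (assumed values)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

print(rssi_to_distance(-59))  # at the calibrated power: 1 m
print(rssi_to_distance(-65))  # 6 dB weaker: ~2 m
print(rssi_to_distance(-71))  # another 6 dB: ~4 m
```

A few dB of multipath fading, which is routine indoors, therefore swings the estimate by meters, which is why AP-based BLE gives zone-level rather than blue-dot accuracy.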
There’s an untapped universe of data around physical locations that businesses can utilize and the right approach can transform real-time location-based services across retail, hospitality, healthcare, industrial and many more industries.
Data is the currency of the new world, and almost every week it gets stolen. One day you hear about hackers successfully stealing nearly $100 million from Bangladesh’s central bank, another day it is about millions of user identities stolen from Target, and another day it is about “hacktivists” piercing online firewalls to make political statements. Cyberattacks keep getting worse.
A Google+ security bug gave outside developers access to the private data of hundreds of thousands of the social network’s users between 2015 and March 2018, according to a Wall Street Journal report. Google neglected to report the issue to the public, allegedly out of fear that the company would face regulations and damage to its reputation, according to sources and documents obtained by the Journal.
Adidas announced in June that an “unauthorized party” had gained access to customer data on Adidas’ US website. Currently, the company believes only customers who shopped on and purchased items from the US version of Adidas.com may have been affected by the breach.
Delta used the same online support service as Sears and was also affected by the reported breach. The airline said customer payment information may have been vulnerable but did not estimate how many of its customers were affected.
And many more.
The current cloud paradigm is failing to protect personal data.
The cybercriminals are well funded, and many cloud companies storing our data are not even staffed to protect it. The surreptitious online trade in stolen files, including credit card numbers and corroborating information, is a robust business valued at $120 billion a year, according to CreditCards.com. Yes, a much larger addressable market than the product idea most of us are working on! Not only that, attackers are far quicker to act on vulnerabilities than organizations. According to the Verizon Data Breach Report, more than 70 percent of attacks exploit known vulnerabilities that have available patches, often within minutes of those vulnerabilities becoming public knowledge.
On top of that, hackers and cyber-adversaries are leveraging automation to launch strikes, using machine learning and artificial intelligence techniques to streamline their operations.
The EU General Data Protection Regulation (GDPR) is a significant first step in data privacy regulation, but it does not solve the problem. Making the cloud more robust or imposing penalties on providers does not help either. The solution has to lie outside the centralized cloud.
Time to DeNet?
DeNet, UnCloud, edge computing, ambient computing, fog computing: there are many names for this. The idea is to not store any personal data in the “centralized” cloud. The personal data remains on my device and is blockchained, and then network connectivity protocols build a “cloud” – a DeNet cloud that supports the functionality we are used to today, or desire.
This may seem counter-intuitive for personalized experiences such as Google or Facebook ads. But it is not. It is just another way of implementing the database.
The user then has full control over the visibility of the “personal” data. Is it safer? Well, yes: with the right blockchain technology, the personal data is not only safer, it is also traceable, and users can get remunerated for its use. It is certainly a superior strategy to “Sorry, your data got hacked early this year, and now let us pray.”
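A minimal sketch of the tamper-evidence idea behind that traceability, in plain Python with SHA-256. The function names and record fields are my own inventions, and a real DeNet would add signatures, consensus, and access control on top:

```python
import hashlib
import json

def block_hash(block):
    # Canonical JSON so the same block always hashes the same way
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, record):
    """Append a personal-data record; each block commits to the
    previous block's hash, so later tampering is detectable."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"record": record, "prev": prev})

def verify(chain):
    """True iff every block still matches its predecessor's hash."""
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append_block(chain, {"type": "location", "shared_with": "retail-app"})
append_block(chain, {"type": "heart-rate", "shared_with": "clinic"})
assert verify(chain)
chain[0]["record"]["shared_with"] = "advertiser"  # tamper with history
assert not verify(chain)
```

Because every access is an immutable, traceable entry, the same chain could double as the audit trail for remunerating the user whenever a record is read.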
With quantum computing making inroads, the damage to public-key cryptography will be catastrophic. In the quantum world, hackers will be able to crack several of today’s encryption techniques, such as RSA and ECC, within days if not hours. The cloud will become even more untrustable in the next five years.
The architecture has to change to protect us and our future generations. I think the time to #DeNet is here now. If you are not looking into how your application should work in this new paradigm, you should perhaps start.
The moment the neural network we built to make a Wi-Fi network smart started producing amazing results, our focus shifted to the mystery unfolding right in front of our eyes. We had built these models, but we did not know how they worked. No one really knows how such a network does what it does. The only thing we can see is that its performance is superhuman.
The brain is very complex. It is a complicated deep neural network. It has lots of memory and is capable of pattern recognition, prediction, imagination, and all sorts of parallel computation, and yet no one really knows how it works.
New advancements like diffusion imaging have given scientists insights into the brain’s inner workings and enabled them to “see” what is going on inside the brain when people are engaged in learning. There is evidence that learning and memory arise from the strengthening and weakening of connections among brain cells. However, what is stored is still a mystery. You can’t look inside a brain and tell that this person knows the meaning of supercalifragilisticexpialidocious. In fact, the brain is made of stuff that dies when you poke around in it. It learns only via received sensory information and emotional reactions, by processing experiences, and then it mysteriously stores the learning. You can’t transfer learning into it or extract learning from it either. No copy and paste!
And, what we have built is something very similar too. Essentially a black box.
Neural network models are brain-inspired: instead of a programmer writing commands to solve a problem, the neural network generates its own algorithm from example data and the desired output. It essentially builds an advanced algorithm that is embedded in the behavior of thousands of simulated neurons, arranged into hundreds of intricately interconnected layers, with the behavior at each neuron tweaked by backpropagation.
Let me give you an example of an image recognition neural network. The task is to look at an image and tell whether it is ‘male’ or ‘female.’ A task that is trivial for a human brain is not so trivial when a neural network has to do it: if you look at the pixel image, there is no obvious pattern in the pixels.

Thus, the first step is training the neural network. Without getting too technical, assume that the neural network is nothing but multiple layers of neurons, with weights assigned to each neuron. The first layer has thousands of neurons (one neuron for each pixel in the image), and the last layer has only two neurons: one becomes ‘1’ when the image in the input layer is ‘male,’ and the other becomes ‘1’ when the image is ‘female.’ For convenience, let’s call them the ‘male’ and ‘female’ neurons respectively. Now, the creator of the neural network feeds millions of images to this model, along with the expected result for each image, i.e., which neuron in the last layer should become ‘1’ for that image. If the input image is ‘male,’ then the ‘male’ neuron should be ‘1,’ and if the input image is ‘female,’ then the ‘female’ neuron should be ‘1.’

The learning begins. The neural network uses the initial weights and runs a formula to compute the values of the two neurons in the last layer. Its goal is to quantify how well or badly it is performing. Did it compute ‘1’ for the ‘male’ neuron for a ‘male’ image? Did it compute ‘1’ for the ‘female’ neuron for a ‘female’ image? If not, it adjusts the weights, and it keeps changing the weights of neurons in all layers through multiple iterations until the ‘male’ neuron outputs ‘1’ for all male images and the ‘female’ neuron outputs ‘1’ for all female images. Now, when you feed it a new image that it has never seen before, the model runs the formula on those weights and “predicts” whether the input image is “male” or “female.” The neural network has created an algorithm.
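The training loop described above can be sketched in a few lines. This is a toy stand-in, not a face classifier: NumPy, the two-input XOR problem instead of image pixels, sigmoid neurons, plain gradient descent, and arbitrary hyperparameters. But the mechanism is the same one: forward pass, compare to the expected label, back-propagate, adjust weights, repeat.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Toy data set standing in for the labeled images: two input
# "pixels", one output neuron (1 = one class, 0 = the other)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR labels

# Layers of neurons with weights: the knobs backprop will tweak
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

loss = lambda p: float(np.mean((p - y) ** 2))
initial_loss = loss(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2))

for _ in range(5000):
    # Forward pass: compute the output neuron's value
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: nudge every weight to reduce the error
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out; b2 -= d_out.sum(0)
    W1 -= X.T @ d_h;  b1 -= d_h.sum(0)

final_loss = loss(out)
```

After training, the error has dropped and the network labels its inputs correctly, yet nothing in W1 or W2 reads as an explanation; the learned algorithm is smeared across the weights.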
Even the person who created and trained the network does not know what is being detected at the intermediate stages of the process, or why the model reaches the conclusion that it does. The model works brilliantly, but what the creator has created is a black box.
Not sure if you followed the second game of the historic Go match between Lee Sedol, one of the world’s top players, and AlphaGo, the AI program built by Google. AlphaGo made a move that astonished everyone: every reporter, photographer, and commentator, and even Lee Sedol himself. Fan Hui, the European Go champion who had earlier lost five straight games to AlphaGo, was also completely blown away. “It’s not a human move. I’ve never seen a human play this move,” he said. Indeed, the move that didn’t make sense to humans changed the path of play, and AlphaGo went on to win the game.
For AlphaGo or an AI-driven Wi-Fi network, it perhaps does not matter why they won or why the network is performing better; however, in many applications we cannot accept arbitrary decisions by AIs that we don’t understand. If a doctor cannot explain why an AI arrives at a specific decision, the doctor may not be able to use its conclusions as a diagnostic tool. If an AI system cannot justify why a loan is being rejected, the financial institution may not be able to use it for loan processing. If an AI system cannot be held accountable, it cannot be used in self-driving cars, as the assignment of responsibility is unclear. And so on. If a service is going to be augmented with “AI,” we need the rationale for how the algorithm arrived at its recommendation or decision.
The AI service needs to be built responsibly. It needs to be explainable. It needs to be transparent. It needs to be reliable.
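For intuition on what an “explanation” looks like, consider a model that is transparent by construction. In a linear scorer, each feature’s contribution to the decision is simply weight × value; per-prediction explanation methods for deep networks (such as LIME and SHAP) try to approximately recover exactly this kind of breakdown. A sketch with invented feature names, weights, and inputs:

```python
def explain_linear(weights, bias, x, feature_names):
    """Per-feature contribution to a linear model's score.
    Since score = bias + sum(w_i * x_i), each term w_i * x_i
    is a complete, honest explanation of that feature's role."""
    contribs = {name: w * v for name, w, v in zip(feature_names, weights, x)}
    return bias + sum(contribs.values()), contribs

# Hypothetical loan scorer (weights and applicant data are invented)
score, why = explain_linear(
    weights=[0.004, -0.8], bias=-1.5,
    x=[700, 0.45],  # credit score, debt-to-income ratio
    feature_names=["credit_score", "debt_to_income"])
```

Here `why` says the credit score added 2.8 to the score while the debt-to-income ratio subtracted 0.36: the kind of per-decision rationale a loan officer or doctor could act on.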
And we should be able to prove that it is “fair.” AI systems are only as good as the data we put into them, and bad data can contain implicit racial, gender, or ideological biases. You can see from the image recognition example above how human bias has the potential to creep in: the “creator” labels the input images as ‘male’ or ‘female’ based on the creator’s own judgment. This is one of the worst problems in AI. Human bias is a source of undesirable decision-making logic creeping into a neural network, and we should be able to determine whether the neural network has ‘discriminated.’
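One concrete screening test for such discrimination is comparing outcome rates across groups, for example the “four-fifths rule” used in US employment law: the lowest group’s selection rate should be at least 80% of the highest group’s. A minimal sketch; the group labels and data are hypothetical, and passing this check is only necessary evidence, not proof of fairness:

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs.
    Returns each group's selection rate."""
    total, selected = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        selected[group] += int(ok)
    return {g: selected[g] / total[g] for g in total}

def passes_four_fifths(rates):
    """Lowest group's rate must be >= 80% of the highest group's."""
    return min(rates.values()) >= 0.8 * max(rates.values())

# Hypothetical model outputs: group A selected 6/10, group B 3/10
decisions = [("A", i < 6) for i in range(10)] + [("B", i < 3) for i in range(10)]
rates = selection_rates(decisions)  # {"A": 0.6, "B": 0.3} -- fails the rule
```

Running checks like this on a trained network’s outputs is one way to make “has it discriminated?” an answerable question rather than a shrug.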
A human can get things wrong, but then they can explain themselves. Similarly, when a computer gets things wrong, we need to know why.
In fact, for the field of AI to reach any measurable sense of maturity, we’ll need methods to debug, error-check, and understand the decision-making process of machines. A lack of trust was at the heart of the failures of some of the best-known AI efforts. Artificial intelligence (AI) is a transformational $15 trillion opportunity, but without explainability it will not reach meaningful deployment.
Explainability is an essential element of the future where we will have artificially intelligent machine partners. In the follow-up post, I will go over the state of the art of explainable AI as well as various approaches to build it.
AI aims to replace us, not simply make a specific industry more efficient
It impacts every industry, and
The pace of change is exponential
If you’re pretty much anyone, you will need to make time to learn more about artificial intelligence and acquire skills that will help you stay relevant, lead, and remain differentiated in the era of AI. Most people are “winging it,” “being an ostrich,” or “not finding time to get their hands dirty,” but that might prove to be a costly mistake, as this is a complex technology that will be part of our lives for years to come.
It may appear that people with degrees in computer engineering, computer science, mathematics, and physics will be at a great advantage, and that the rest of us now have to go learn all of that. It is NOT true.
How to learn and what exactly to learn depend on who you are, what stage of your career you are in, and what career you are pursuing.
Honing valuable human skills
In 1983, psychologist Howard Gardner published a book titled “Frames of Mind: The Theory of Multiple Intelligences,” in which he explains that “there exists a multitude of intelligences, quite independent of each other, and that each intelligence has its strengths and constraints.”
The field of artificial intelligence is about building these behavioral and cognitive traits in machines.
The era of AI will bring a renewed focus on human intelligence, as the primary purpose of AI is to augment humans.
We all should focus on these skills. We can see beyond biases; we have the capability to make unbiased, objective decisions in the ambiguous situations that may arise. We also have the ability to think outside the box: outside of everything that we have learned or been trained on. And lastly, adaptability is accepting the changes we face and moving forward by making the most of those changes. The mundane and boring will be removed from our lives, and like any other change this will bother us, but be open to adapting. Not sure if Charles Darwin said these words as is, but ‘It is not the most intellectual of the species that survives; it is not the strongest that survives; the species that survives is the one that is able to adapt to and to adjust best to the changing environment in which it finds itself.’ These skills will become very valuable in the era of AI.
Understanding AI terminology and principles
Yes, people with computer programming and data science skills will code artificial intelligence models, write APIs, and build a variety of frameworks. However, AI and its applications are not only about writing frameworks: humans will apply that knowledge to solve problems or augment our lives. The main thing you need to do is become aware of the terminology and principles. We will all be interfacing with these AI systems going forward, much as technologies such as Wi-Fi, GPS, printing, and calorie counts are part of our day-to-day lives.
There are plenty of books, websites, courses, classes, and certificates/degrees for gaining knowledge in this field. As it is a new and complex field, I would suggest focusing more on ‘in person’ or ‘hybrid’ learning models than on online-only courses.
If you are an investor, you don’t need to know all the technical details, but you do need a basic understanding of the concepts. You need to be able to separate the fluff from the real. Today almost everyone uses terms like machine learning and deep learning very casually. Adding a voice agent that uses a natural language processing API from Amazon is not AI; it’s just programming. You have to be able to ask very specific questions, for example: how do they use training data versus inference, will GPUs make a difference, and if yes, why?
As an executive or leader, you need to understand what AI means for your business, how your company can take advantage of it, and how you can lead it. I recommend taking an in-person half-day or one-day workshop to help you answer these questions. AI presents real business opportunities to improve top-line revenue; it also enables organizations to break prevailing trade-offs between speed, cost, and quality and to perform tasks that traditionally could only be performed by humans. As an executive, you want to learn how you can use AI to support your business needs and how you can devise a strategy for a competitive edge and sustainable growth.
For engineering mindsets, if you want to be among those building and implementing the models, then algebra, calculus, algorithms, statistics, programming, data science, neural networks, machine learning, deep learning, and so on are essential.
For non-engineers, you need to ensure that you have the skills to interface with computers and technology. I would highly recommend taking some classroom training on the “basics” or joining some “meetups.”
Be attentive to the gaps where a human in the loop is required
A human in the loop will be required almost everywhere: sometimes to think and create what is needed, sometimes to help when the software is uncertain of the answer, sometimes to measure and assess whether system behavior and responses are functioning as expected, and sometimes to troubleshoot when the machine is not performing as desired.
Troubleshooting – Troubleshooting AI will be challenging. A neural network is just like our brain: you can’t cut your head open and see what you are thinking, yet that is precisely the job of the human in the AI world. Analyzing failures in an AI system and correcting them in the most efficient and optimized manner is a skill we need to learn to stay ahead in a fiercely competitive race.
Malfunction – Machines do break down. Since an AI system breakdown can have a severe impact on business, the ability to maintain proper functioning during system failures is critical. Humans will need the capability to take over and manage the system in the event of an AI system failure.
Emotional Alignment – One of the most important things to monitor is the alignment of the AI system with the emotional needs of the business. Always remember that AI systems are machines. Machines can achieve more, can respond appropriately to more complicated situations, and can handle more parameters of variance. However, they lack passion, and dispassion is not always a strength. For example, during one of the terrorist attacks in the UK in June 2017, the demand for Uber taxis suddenly soared. Following its usual increased-demand algorithm, the Uber system automatically hiked fares. This led to strong protest and condemnation from around the world. This classic example sheds light on the need for humans to intelligently monitor AI systems and bring them into line whenever needed. Humans will play an active role in feeding emotional decision-making into machines so that they make human-like decisions.
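In code, that human oversight can be as simple as an override the pricing algorithm must respect. The sketch below is invented for illustration: the function name, cap, and thresholds are assumptions, not any company’s actual policy:

```python
def surge_multiplier(demand_supply_ratio, emergency_flag=False, cap=3.0):
    """Demand-based surge pricing with a human-in-the-loop guardrail:
    when an operator flags an emergency, pricing is frozen at 1.0
    no matter what the demand signal says."""
    if emergency_flag:
        return 1.0  # the dispassionate algorithm yields to human judgment
    # Otherwise, scale the fare with demand, clamped to [1.0, cap]
    return min(max(1.0, demand_supply_ratio), cap)

print(surge_multiplier(2.5))                       # normal spike: 2.5x
print(surge_multiplier(2.5, emergency_flag=True))  # flagged crisis: 1.0x
```

The interesting design question is who sets `emergency_flag`: a human operator watching the news will see context the demand signal never can.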
Unlearning – By nature, human needs evolve and never cease to do so. This constant urge to achieve more ushers in new learning at every stage. Humans also perish, and hence learning begins afresh with every generation. Machines, on the contrary, retain old knowledge unless guided by means and ways to acquire new learning. Another aspect is ‘unlearning.’ Humans are capable not only of selective unlearning but also of identifying the need for it. This unique ability gives them the power to handle and manage machines so that they too can ‘learn to unlearn’ and behave appropriately per the needs of the organization. In crucial and complex situations, humans play a vital role in continually updating the learning databases of machines. This challenging task requires in-depth knowledge to control and feed the machines toward the outcomes the organization desires. In this important context of decision making, we can safely say that the ‘unlearning’ capability of humans can, at times, beat the smartest of machines that are designed only to learn.
Undoubtedly, AI will replace people to some extent and eliminate certain jobs. Humans will also involuntarily cede some control over machines and systems to AI. However, as you saw in the previous sections, AI will open up new opportunities and new skills for humans to learn. The dog is man’s best friend, and so will be machines! With mutual collaboration, we will be able to achieve more than either could achieve alone. Welcome to the era of Hybrid Intelligence!
Neural Networks for Route Management for a Driverless Car Fleet
Managing a fleet of autonomous vehicles for mobility-as-a-service, competing with the likes of Uber or the black cab, poses another set of challenges beyond the singleton self-driving car. In this session, we will discuss how artificial neural networks will play a critical role in fleet management operations such as route computation.