When I was a kid I loved a cartoon called “The Jetsons”. The show was about a futuristic family living in a futuristic world, using futuristic gadgets and gizmos. Their world was filled with a myriad of machines. Some of these machines were simple, but quite a few of them were robotic. The most prominent of these was the Jetsons’ maid, Rosie. Much of the show’s humor came from these “intelligent” machines attempting to deal with all the odd and unpredictable things that happen in the course of dealing with humans and the human experience.
As a kid I just watched the show to be entertained; I never really thought about the implications of what I was watching. I never considered that these machines were receiving massive amounts of information, trying to make sense of all that data, selecting an appropriate response based upon some algorithm they were running, and then possibly storing the data and their response for future consideration.
When I was in engineering school one of my favorite professors told my class that computers are essentially very stupid. They only do one thing at a time. They only do what they are told. But, they do it very fast, and they don’t get distracted like people do.
Since then, computers have grown by leaps and bounds in sophistication. I carry a cell phone in my pocket today that can run circles around my first computer, which weighed thirty pounds and covered an entire desktop. However, the basic premise that computers are stupid and only do what they are programmed to do is still true … or is it?
Until recently, computers were, for the most part, governed by their central processing unit, or CPU. The CPU is the brain of a computer, and its speed and capability primarily determine how useful the computer is. The CPU handles general tasks like word processing, surfing the web, reading and sending email, and other basics like that.
In recent years graphics processing units, or GPUs, have come to the forefront. On the surface, GPUs are simply souped-up video controllers: they render graphics on displays for the user to see. Many of today’s games demand intense graphical work, with advanced physics calculations and three-dimensional renderings and rotations, and these are all things at which GPUs excel. However, it is not all fun and games; medical imaging and computer-aided design applications make use of the abilities of GPUs as well.
It turns out, though, that GPUs are not limited to working with graphical data. Because they can handle massive amounts of data while simultaneously performing complex operations on that data at ridiculous speeds, GPUs have emerged as amazing tools in the field of artificial intelligence (AI) and machine learning. It is the GPU that is at the heart of most of the automated technology we see today. Self-driving cars, ride-sharing apps, facial recognition, voice recognition, fingerprint recognition, spam filtering, and targeted ads on Facebook and other sites all use AI and machine learning.
NVIDIA has made a name for itself in this area and has established itself as the undisputed leader in the GPU market. NVIDIA GeForce products are lusted after by gamers around the globe. NVIDIA Quadro products are used by professionals everywhere. NVIDIA Titan and Tesla are the kings of AI and machine learning. However, I want to talk about a product in the AGX line that NVIDIA released in December of 2018. NVIDIA calls it Jetson AGX Xavier, but those in the AI and machine learning realm call it the “Holy Grail”.
It does not top NVIDIA Titan/Tesla in raw GPU power, but Jetson is far more than just a GPU. NVIDIA calls it a module, but it is really a solution: a stand-alone entity meant to serve as an autonomous machine in whatever application a customer can imagine. It is quickly becoming the gold standard for AI and machine learning solutions.
Hardware-wise, NVIDIA has packed this baby with everything except the kitchen sink. You can run Linux on the 8-core Carmel 64-bit ARM CPU. For the heavy-duty AI and machine learning work there is a 512-core Volta GPU with Tensor Cores. Two memory options are available, 8 GB and 16 GB; in either case the memory is blazingly fast 256-bit LPDDR4x. The whole solution is capable of 32 TOPS of performance while consuming as little as 10 W. All of this comes in a measly 105 mm by 105 mm form factor. To sum up, you get workstation-level performance in a tiny package.
Another of NVIDIA’s hallmarks is its tools. The company knows that providing the best hardware means nothing if customers are unable to use that hardware effectively. NVIDIA’s JetPack SDK lets Jetson customers get up and running in record time, saving huge amounts of development time and cost. The SDK includes all the OS images, libraries and APIs, developer tools, samples, and documentation needed to rapidly develop, debug, and deploy sophisticated solutions. Python, CUDA, OpenCL, OpenCV, C, and C++ are all supported.
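To give a flavor of what that support looks like in practice, here is a minimal Python sketch that grabs camera frames and runs a basic OpenCV operation on each one. It assumes a camera at device index 0 and the OpenCV package that JetPack bundles; the particular processing step (edge detection) is just an arbitrary placeholder, not one of NVIDIA’s samples.

    # Minimal sketch: read frames from a camera and run a simple
    # OpenCV operation on each one. Assumes OpenCV (cv2) is installed
    # and a camera is available at device index 0 (an assumption).
    import cv2

    cap = cv2.VideoCapture(0)              # open the default camera
    if not cap.isOpened():
        raise RuntimeError("Could not open camera 0")

    while True:
        ok, frame = cap.read()             # fetch one frame
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 100, 200)  # simple edge detection
        cv2.imshow("edges", edges)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
            break

    cap.release()
    cv2.destroyAllWindows()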
Jetson is designed specifically for robots and edge computing. As such, it is built to handle all kinds of sensory inputs in real time and to run all sorts of inference engines with complete autonomy and no human intervention. Cloud connectivity; navigation with path planning, collision avoidance, and complex terrain following; package delivery; and industrial inspection are just a few examples of what Jetson could be programmed to do. The sky is truly the limit.
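As a rough illustration of what such an autonomous loop looks like, here is a self-contained Python sketch. The sensor and the “model” are toy stand-ins (random readings and a threshold rule), not any real NVIDIA inference engine; a real system would swap in camera or lidar input and an optimized network.

    # Toy sketch of the sense-infer-act loop Jetson is built for.
    # Every piece here is a stand-in, not an NVIDIA API.
    import random
    import time

    def read_sensor():
        # Stand-in for a camera frame, lidar scan, etc.
        return random.random()

    def infer(reading):
        # Stand-in for a neural network: classify the reading.
        return "obstacle" if reading > 0.8 else "clear"

    def act(decision):
        # Stand-in for steering, braking, flagging a defect, etc.
        print(f"decision: {decision}")

    for _ in range(10):                 # a real loop would run indefinitely
        decision = infer(read_sensor()) # inference happens locally,
        act(decision)                   # with no human in the loop
        time.sleep(0.1)                 # pace the loop (~10 Hz)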
This brings us back to the Jetsons cartoon. As I think back and consider all the crazy things that happened, I conclude that either something happened in the timeline that prevented NVIDIA from inventing the Jetson AGX Xavier, or the engineers of that age were idiots. I come to this conclusion because, if NVIDIA had invented it and there were engineers with any skill at all, Rosie and her robotic friends might have made madcap mistakes once, but they would not have made them a second time. They would have learned from their first mistake. Further, if they were networked together, as they should have been, then only one of them would have made that mistake, allowing all the others to learn from it. After all, isn’t that what AI and deep learning are all about … eliminating low-quality decisions and replacing them with higher-quality ones?