The tremendous computing horsepower and bandwidth offered by cloud, 5G and HPC servers make it tempting for solution providers to focus on centralized computing strategies. But in a world of connected devices and sensors, edge and distributed computing make a big difference in quality, cost and real-time performance. Distributed computing is the key to next-generation solutions that demand AI-based data analytics, yet the supply of AI-capable edge chips is currently very thin. The artificial intelligence of things (AIoT) – a combination of IoT and AI – is also driving requirements for new low-power AI chips and platforms.
Edge, central and distributed computing
By being close to the data source, edge devices have access to original data without the losses and bandwidth restrictions of the networks. The quality of that data makes a big difference in AI-based analytics and decision-making. But edge-only computing, with compute and storage resources confined to remote locations, has its own issues: flexibility in upgrades, software options, computing capability, and scalability of resources is limited.
Centralized computing offers the high-performance advantages of cloud and servers, along with great flexibility in upgrades, software options, and scalability of resources. But some of these advantages are negated by data quality, bandwidth restrictions, and latency issues. Without any screening at the edge, huge volumes of data can be brought to the servers unnecessarily, making computing an expensive and tedious affair. Sometimes it is not even safe or legal to move data to remote servers for analytics, so data management is also an issue for sensitive content.
Distributed computing is the best option in bandwidth- and connectivity-limited scenarios. Central computers remain key to making decisions based on data from thousands of edge devices, and they offer higher levels of AI capability than power-limited edge devices. But in a distributed model, not all data has to reach the central computers. Edge devices can run analytics on the raw data and generate metadata for higher-level analytics in the server; what reaches the servers is post-processed and protected data. These advantages make distributed computing the most effective model for most solutions.
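The edge-to-server pattern described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not any vendor's API: the frame format, field names, and threshold are all assumptions made for the example. The point is that the edge node reduces raw sensor data to compact metadata, and only that metadata crosses the network.

```python
# Illustrative sketch of edge-side screening: raw frames stay on the
# device; only compact metadata is queued for upload to the server.
# All names and the threshold below are hypothetical.

def summarize_frame(frame):
    """Reduce a raw frame (a list of pixel intensities) to the few
    statistics the server needs for high-level analytics."""
    return {
        "mean": sum(frame) / len(frame),
        "peak": max(frame),
        "n_pixels": len(frame),
    }

def edge_filter(frames, activity_threshold=10):
    """Screen frames on the device: forward metadata only for frames
    whose peak intensity suggests activity worth analyzing centrally."""
    uploads = []
    for frame in frames:
        meta = summarize_frame(frame)
        if meta["peak"] >= activity_threshold:
            uploads.append(meta)  # metadata is sent, never raw pixels
    return uploads

# One quiet frame and one active frame: only one summary is uploaded,
# and it is a handful of numbers instead of the full pixel payload.
frames = [[1, 2, 1, 2], [1, 50, 3, 2]]
sent = edge_filter(frames)
```

Here the bandwidth saving comes from two decisions made at the edge: uninteresting frames are dropped entirely, and interesting ones are reduced to a summary before transmission.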
Edge computing – the semiconductor gap
In both edge and distributed computing scenarios, solution makers routinely hit a roadblock with System-on-Chips (SoCs). Unlike high-performance central servers and cloud computers, power-limited edge hardware has few AI/ML options. In solutions that demand data analytics at the edge, low-power requirements and cost constraints call for new chip architectures, and AI and ML add significantly to the problem. For power and cost reasons, one cannot afford many redundancies in an edge platform. Yet there is a severe scarcity of these edge-friendly chips, and the recent semiconductor supply issues are adding fuel to the fire. This drives many solution makers to keep depending on traditional general-purpose processing options like embedded CPUs and GPUs, and to make significant compromises in their solutions.
Why domain specific?
A few general-purpose AI/ML chips have been announced from multiple sources. But being general-purpose, they emphasize raw neural-computing TOPS (tera-operations per second) on a set of hand-picked AI algorithms rather than solution-specific overall computing speed, and general-purpose designs do not work best for edge scenarios. For example, an on-camera surveillance analytics chip needs advanced video encoding, image enhancement, and pixel-intensive computer vision capabilities along with neural TOPS for AI inference. Individual edge scenarios need their own custom, solution-specific computing architectures, which makes edge computing remarkably demanding for semiconductor design. The supply of existing AI/ML chip designs is also strategically controlled, so solution providers are, in general, forced to make significant compromises on their architectures.

The limitations of conventional chips are leading to new ways of building solutions. The winners will be those who identify, build and integrate the right combination of edge computing chips for their distributed computing model. This is bringing new players into the semiconductor space, along with new business models that disrupt existing value chains. The thick lines that once stood between chip makers, software providers, and product and solution makers are thinning fast, thanks to disruptive technologies like AI.