Powering the intelligent edge with
Netrasemi Edge AI SoCs
Embedding real-time AI on edge devices with a power-efficient Graph Stream Parallel Architecture.
Explore the power of edge-AI computing for your IoT solution with Netrasemi SoCs
Enabling optimal computing for smart IoT solutions
AI for smart IoT devices is driving the need for new chip architectures and computing platforms. Traditional systems-on-chip offer very limited capabilities for the new world of embedded AI IoT solutions. The emergence of a distributed IoT compute model, with its need for real-time on-device AI (edge-AI) processing, is exposing major gaps in the existing chip supply chain. New AI chipsets on the market also have major gaps in the domain-specific features that are critical for the overall efficiency of a resource-limited edge platform. To become a leader in this solution space, you may need to accelerate your market entry with competitive SoCs that provide an optimal computing model, unique domain-specific features, power-performance efficiency, and competitive pricing. Netrasemi helps you achieve these goals with our domain-specific architecture (DSA), IP-rich SoCs, AI development tools, flexible SDKs, platform reference designs, and unique go-to-market strategies. We bring a 10x improvement to your time to market.
A Unified Domain Specific Architecture (DSA) for Smart Sensors to Edge Servers
Embedding intelligence in small and tiny devices is the latest trend in IoT. The “edge-AI” strategy makes sensors smart, cheap, responsive, and independent, with minimal data sent to central computers. Smart sensors, aggregators, and edge servers work as the key compute elements in a world of distributed AIoT. Each of these edge devices demands unique domain-specific features in an aggressively small form factor and power budget. This drives custom chip architectures and software frameworks for each device, making the job of solution builders extremely difficult. Netrasemi brings a family of SoCs powered by a unique domain-specific architecture that scales from tiny edge devices to edge servers. Our SoCs are built on a family of high-performance acceleration cores, a unique heterogeneous compute fabric, and a UCIe-based Die-to-Die (D2D) interconnect. This architecture enables true graph-stream processing without the power-performance boundaries of a single silicon die. The architecture is supported by a common software framework that allows our customers to build distributed computing solutions on a unified hardware-software architecture.
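To make the graph-stream idea concrete, the sketch below models a toy pipeline in plain Python: nodes represent processing stages, each tagged with the kind of compute element (CPU, DSP, NPU) it would map to, and frames stream from stage to stage. This is purely an illustrative sketch of the concept under our own assumptions; the names (GraphNode, StreamGraph, the core labels) are hypothetical and do not come from the Netrasemi SDK.

```python
# Illustrative sketch only: a toy graph-stream pipeline in plain Python.
# GraphNode, StreamGraph, and the core labels are hypothetical names used to
# model the concept; they are NOT the Netrasemi SDK API.
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class GraphNode:
    name: str
    core: str                           # target compute element, e.g. "CPU", "DSP", "NPU"
    fn: Callable[[Any], Any]            # per-frame processing function
    downstream: list = field(default_factory=list)

@dataclass
class StreamGraph:
    nodes: dict = field(default_factory=dict)

    def add(self, node: GraphNode) -> GraphNode:
        self.nodes[node.name] = node
        return node

    def connect(self, src: str, dst: str) -> None:
        self.nodes[src].downstream.append(self.nodes[dst])

    def push(self, entry: str, frame: Any) -> None:
        """Stream one frame through the graph, stage by stage."""
        pending = [(self.nodes[entry], frame)]
        while pending:
            node, data = pending.pop(0)
            out = node.fn(data)
            print(f"{node.name} on {node.core}: {out}")
            pending.extend((nxt, out) for nxt in node.downstream)

# Build a minimal camera -> preprocess -> inference -> postprocess pipeline.
g = StreamGraph()
g.add(GraphNode("camera", "CPU", lambda f: f))
g.add(GraphNode("preprocess", "DSP", lambda f: [p / 255.0 for p in f]))
g.add(GraphNode("detect", "NPU", lambda f: {"score": round(sum(f) / len(f), 3)}))
g.add(GraphNode("postprocess", "CPU", lambda r: "person" if r["score"] > 0.5 else "none"))
g.connect("camera", "preprocess")
g.connect("preprocess", "detect")
g.connect("detect", "postprocess")

g.push("camera", [200, 180, 90, 250])   # one simulated image frame
```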
Faster time to market for AI solutions
Flexible APIs, SDKs, reference designs
Graph stream architecture
Compute Cores
Deep Neural Processor (NPU)
WHAT WE DO
Making edge AI product development efficient, simple, and economical.
OUR TECHNOLOGY PARTNERS
Blog
Stay informed about what is happening in the technology domain.