How AI Edge Servers Enable You to Run and Manage Edge AI Workloads with High Performance, Flexibility, and Security
Cities, factories, retail stores, hospitals, and other organisations are increasingly investing in AI at the edge: processing data and running AI and ML algorithms close to where the data is generated, overcoming bandwidth and latency limitations. Edge AI enables real-time analytics, faster decision-making, predictive care, personalised services, and improved business operations.
Our AI Edge servers are designed for these deployments and come in a range of compact form factors to fit the environment. Out of the box, our systems deliver the performance needed for low-latency applications, an open architecture with pre-integrated components, compatibility with diverse hardware and software stacks, and the privacy and security features that complex edge deployments require.
AI Edge Workloads
- Edge Video Transcoding: the process of converting a video stream or file from one format to another, to optimise for different devices, networks, or applications. As an example, a video captured by a drone camera may need to be transcoded to a lower resolution or bitrate to be streamed over a cellular network to a mobile device.
- Edge Inference: running a trained AI or ML model on an edge device to make predictions or decisions based on the input data. A smart camera may use edge inference to detect faces, objects, or gestures in real time. Edge inference can improve the performance, privacy, and reliability of AI applications, as the data does not need to be sent to the cloud for processing.
- Edge Training: updating or improving an AI or ML model on an edge device using local data, enhancing the personalisation, accuracy, and adaptability of AI applications. The model can learn from the data that is most relevant to the user or the environment.
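To make the transcoding workload concrete, here is a minimal sketch of how the drone-video scenario above might be expressed as an ffmpeg invocation. The file names, target resolution, and bitrate are illustrative assumptions, not a specific deployment recipe.

```python
# Sketch: build an ffmpeg command that downscales a drone recording and
# caps its bitrate so it can stream over a cellular link. File names and
# target settings below are hypothetical examples.

def build_transcode_cmd(src: str, dst: str,
                        height: int = 720, bitrate: str = "2M") -> list:
    """Return an ffmpeg argument list that scales the video to `height`
    pixels tall (width auto-chosen to keep the aspect ratio) and caps
    the video bitrate, re-encoding with H.264 for broad compatibility."""
    return [
        "ffmpeg", "-i", src,
        "-vf", f"scale=-2:{height}",   # -2 keeps the width even, as H.264 requires
        "-c:v", "libx264",             # widely supported video codec
        "-b:v", bitrate,               # target video bitrate
        "-c:a", "aac",                 # re-encode audio for mobile playback
        dst,
    ]

cmd = build_transcode_cmd("drone_4k.mp4", "drone_720p.mp4")
# On a host with ffmpeg installed, run it with:
#   subprocess.run(cmd, check=True)
```

An edge server with GPU or media-accelerator cards would typically swap `libx264` for a hardware encoder, but the command structure stays the same.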
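The inference and training workloads above can be sketched together in a few lines. This is a toy illustration only: a hand-set logistic-regression "model" stands in for a real network, and the weights and sample data are made up. It shows the essential pattern, scoring locally without a cloud round trip, then nudging the model with a gradient step computed from local data.

```python
import math

# Toy "model": illustrative weights for a two-feature logistic regression.
weights = [0.5, -0.25]
bias = 0.1

def predict(x):
    """Edge inference: score an input locally, no data leaves the device."""
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid probability

def train_step(x, label, lr=0.1):
    """Edge training: one gradient-descent update using a local sample."""
    global bias
    err = predict(x) - label            # gradient of log loss w.r.t. z
    for i, xi in enumerate(x):
        weights[i] -= lr * err * xi
    bias -= lr * err

sample, label = [1.0, 2.0], 1.0
before = predict(sample)
train_step(sample, label)
after = predict(sample)   # probability moves toward the local label
```

In production the model would come from a framework's edge runtime, but the division of labour is the same: inference runs constantly on incoming data, while occasional local training steps personalise the model to its environment.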
Systems for Edge AI Workloads
To run and manage edge AI workloads, developers need powerful and efficient hardware and software platforms that can support the high computational and storage demands of AI and ML.
2U Hyper-E: SYS-221HE-FTNR
This powerful and versatile server is ideal for demanding workloads such as 5G core and edge, inference and machine learning, and network function virtualization. It is powered by dual 4th Gen Intel® Xeon® Scalable processors, which offer high core counts and high bandwidth memory (HBM) technology. It also supports up to 8TB of DDR5-4800 memory for large data sets and intensive workloads. Multiple PCIe 5.0 slots can accommodate up to 4 double-width or 8 single-width GPU/accelerator cards, providing high performance for AI inference, machine learning, cloud computing, and other applications. With up to 2 AIOM networking slots compatible with OCP NIC 3.0, it offers flexible networking options.
With 8 hot-swap 2.5" NVMe/SATA/SAS drive bays, it can provide fast and reliable storage. The server also has 2 internal M.2 NVMe/SATA drive slots and supports RAID 0/1/5/10 via a storage add-on card, which can enhance data protection and performance. Lastly, this system supports IPMI 2.0 with virtual media and KVM-over-LAN, which can enable remote management and monitoring.
Short-Depth Multi-GPU Edge Server: SYS-111E-FWTR
This short-depth 1U rackmount system supports 4th Gen Intel® Xeon® Scalable processors and up to 2TB of DDR5 ECC RDIMM memory. It is designed for IoT and edge computing applications such as multi-access edge computing, AI at the edge, 5G DU (distributed unit) workloads, and satellite communications. It features two 10GbE ports, one dedicated IPMI port, and 800W redundant power supplies.
Compact System: SYS-E403-13E
Fanless and wall-mountable, this compact system supports the 4th Gen Intel® Xeon® Scalable processors (Sapphire Rapids) and up to 2TB of DDR5 ECC RDIMM memory. It is designed for IoT and edge computing applications such as multi-access edge computing, FlexRAN, Open RAN vBBU, AI at the edge, 5G DU workloads, and satellite communications. It features two 10GbE ports, one dedicated IPMI port, and 800W redundant power supplies.
Some of the key benefits include:
- Delivers high performance and scalability with the latest Intel® Xeon® Scalable processors and PCIe 5.0 support
- Offers low latency and high bandwidth with dual 10GbE network connectivity and an M.2 NVMe slot
- Provides enhanced security and reliability with cryptographically signed firmware, secure boot, secure firmware updates, automatic firmware recovery, and system lockdown features
- Supports a wide range of edge AI workloads with three PCIe 5.0 slots that can accommodate AI accelerator or other add-on cards
Embedded System: SYS-E100-13AD
Known for its compact size, this fanless embedded system supports the 12th Generation Intel® Celeron® 7305E and Core™ i7-1265UE/i5-1245UE/i3-1215UE processors. It has two DDR5 SO-DIMM slots supporting up to 64GB of non-ECC memory, and three M.2 slots: one for storage, one for wireless, and one for AI accelerator cards. It also has two 2.5 Gigabit Ethernet ports and one dedicated IPMI port, and comes with an 84W power adapter.
Some of the key applications the SYS-E100-13AD can handle are industrial automation, retail, smart medical, expert systems, digital signage, kiosks, interactive info systems, and IoT gateways for smart factories, smart buildings, and security and surveillance. It can drive four independent displays via HDMI and DP outputs, and a nano-SIM card slot provides cellular connectivity. It has a wide operating temperature range of 0°C to 50°C and can withstand shock and vibration.
Compact Industrial GPU Workstation: DLAP-8000 Series
The DLAP-8000 Series is designed for industrial automation environments, with a robust fanless chassis that withstands shock and vibration, and a front-accessible I/O design that simplifies maintenance.
To support various edge AI workloads, this workstation is powered by 9th Gen Intel® Xeon®/Core™ i7/i3 LGA1151 processors with the workstation-class Intel C246 chipset. Dual SO-DIMM slots support up to 64GB of DDR4 memory, with ECC options, to handle large data sets and intensive workloads.
For flexible and reliable storage, the DLAP-8000 Series can accommodate up to four hot-swappable 2.5" SATA 6 Gb/s trays with RAID 0/1/5/10 support, plus CFast, M.2 2280, and one mini-PCIe slot. It also offers rich I/O, including 2x DP++, 1x DVI-I, 3x GbE, 4x COM, 8-ch DI, 8-ch DO, TPM 2.0, 2x USB 3.1 Gen2, 1x USB 3.1 Gen1, 3x USB 2.0, and 2x USIM for easy connectivity and expansion. Moreover, it supports two FHFL PEG cards, which can provide high performance for AI inference, machine learning, cloud computing, and other applications.
Compact Industrial GPU Workstation: DLAP-4000 Series
This compact industrial GPU workstation supports various edge AI workloads. It is powered by 8th/9th Gen Intel® Core™ i7/i5/i3 processors (LGA1151 socket), delivering high performance and scalability, and has dual SO-DIMM slots for up to 32GB of DDR4 non-ECC memory.
The DLAP-4000 Series supports one FHFL dual-width PEG card, such as an NVIDIA Quadro® card for graphics-intensive workloads. It also offers rich I/O, including 1x DVI, 1x HDMI, 1x DP (from CPU), 3x GbE, 4x COM, 8-ch DI, 8-ch DO, TPM 2.0, 2x USB 3.1 Gen2, 1x USB 3.1 Gen1, 3x USB 2.0, and 2x USIM for easy connectivity and expansion. Storage options are flexible and reliable, with 2 hot-swappable 2.5" SATA 6 Gb/s trays, 1 M.2 2280 drive, 1 CFast slot, and 1 Mini PCIe slot.
Edge Inference Platform: DLAP-201-JT2
The DLAP-201-JT2 is an edge AI inference platform that uses the NVIDIA® Jetson™ TX2 module to accelerate deep learning applications. It is a compact and fanless system that can operate in a wide temperature range from -20°C to 70°C. It is suitable for various edge AI scenarios, such as smart cities, smart factories, smart retail, and smart healthcare.
The DLAP-201-JT2 has rich I/O ports and expansion slots, including HDMI, DP, DVI, GbE, COM, USB, USIM, mini PCIe, M.2, and CFast, and delivers strong performance for AI inference at the edge. It is part of the ADLINK NVIDIA Jetson Edge AI Platform product line.
Key Technologies
- CPU- or GPU-based AI edge inferencing, GPU-based AI edge training, and video transcoding/encoding/decoding
- NVIDIA L4, L40S, L40, A30, A40, T4, A2 GPUs
- Short-depth chassis design for edge locations with AC or DC power supply options
- Front I/O with a broad range of expansion slots and I/O ports for flexibility and serviceability
- Ruggedized systems designed to be placed outside of the data centre
Ready to get started?
If you are ready to take your Edge AI projects to the next level with DiGiCOR Solutions, don’t hesitate to contact us today. We are here to help you find the best solution for your needs and budget.
Don’t settle for less. Choose DiGiCOR Solutions for Edge AI today.