Proving the Business Case for the Internet of Things

Intel and Philips test AI on healthcare imaging

Steve Rogerson
August 21, 2018

Intel and Philips have tested two healthcare use cases for deep learning inference models and AI: one on x-rays of bones for bone-age-prediction modelling, the other on CT scans of lungs for lung segmentation.
Using Intel Xeon scalable processors and the OpenVINO toolkit, the two companies achieved a 188-times speed improvement for the bone-age-prediction model and a 38-times improvement for the lung-segmentation model over the baseline measurements.
“Intel Xeon scalable processors appear to be right for this type of AI workload,” said Vijayananda J, chief architect at Philips HealthSuite Insights. “Our customers can use their existing hardware to its maximum potential, while still aiming to achieve quality output resolution at exceptional speeds.”
Until recently, there was one prominent hardware method to accelerate deep learning: graphics processing units (GPUs). By design, GPUs work well with images, but they also have inherent memory constraints that data scientists have had to work around when building some models.
Central processing units (CPUs) – in this case Xeon scalable processors – don’t have those same memory constraints and can accelerate complex, hybrid workloads, including the larger, memory-intensive models typically found in medical imaging. For a large subset of artificial intelligence (AI) workloads, the Xeon processors can meet data scientists’ needs better than GPU-based systems. As Philips found in the two recent tests, running these workloads on CPUs enables the company to offer AI at lower cost to its customers.
AI techniques such as object detection and segmentation can help radiologists identify issues faster and more accurately, which can translate to better prioritisation of cases, better outcomes for more patients and reduced costs for hospitals.
Deep learning inference applications typically process workloads in small batches or in a streaming manner, which means they do not exhibit large batch sizes. CPUs are a good fit for low-batch or streaming applications. In particular, Xeon scalable processors are said to provide an affordable, flexible platform for AI models, particularly in conjunction with tools such as the OpenVINO toolkit, which helps deploy pre-trained models efficiently without sacrificing accuracy.
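The streaming pattern described above can be sketched in a few lines of plain Python. The `infer` stub below merely stands in for a real compiled model (such as one deployed via the OpenVINO toolkit, whose actual API is not shown in the article); the point is the batch-size-1 loop, in which each image is processed as it arrives rather than being accumulated into large batches.

```python
# Toy stand-in for a compiled inference model; a real deployment would
# load a pre-trained network instead. This function pretends to do
# "segmentation" by thresholding pixel values into a binary mask.
def infer(image):
    return [1 if px > 0.5 else 0 for px in image]

def stream_inference(image_source):
    """Process images one at a time (batch size 1), as in a streaming
    deployment, instead of accumulating a large batch first."""
    for image in image_source:
        yield infer(image)

# Usage: a stream of three tiny "images" (flattened pixel lists).
images = [[0.2, 0.7, 0.9], [0.1, 0.4, 0.6], [0.8, 0.3, 0.5]]
masks = list(stream_inference(images))
```

Because each result is yielded as soon as its input is processed, latency per image stays low – the property that makes CPUs attractive for this class of workload.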
These tests suggest that healthcare organisations can run AI workloads on their existing CPU infrastructure without investing in expensive dedicated accelerator hardware.
The results for both use cases surpassed expectations. The bone-age-prediction model went from an initial baseline test result of 1.42 images per second to a final tested rate of 267.1 images per second after optimisations, an increase of 188 times. The lung-segmentation model far surpassed the target of 15 images per second by improving from a baseline of 1.9 images per second to 71.7 images per second after optimisations.
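The speedup figures quoted above follow directly from the reported throughput numbers; the short calculation below reproduces them (all values are taken from the article).

```python
# Throughput figures reported in the article, in images per second.
baseline = {"bone_age": 1.42, "lung_segmentation": 1.9}
optimised = {"bone_age": 267.1, "lung_segmentation": 71.7}

# Speedup is simply optimised throughput divided by baseline throughput.
speedup = {name: optimised[name] / baseline[name] for name in baseline}
# bone_age: 267.1 / 1.42, roughly 188x
# lung_segmentation: 71.7 / 1.9, roughly 38x
```

The lung-segmentation result also clears the stated target of 15 images per second by a wide margin (71.7 is nearly five times that target).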
Running healthcare deep learning workloads on CPU-based devices can provide direct benefits to companies such as Philips because it allows them to offer AI-based services that don’t drive up costs for their end customers, says Intel. As shown in this test, companies such as Philips can offer AI algorithms for download through an online store as a way to increase revenue and differentiate themselves from growing competition.
Multiple trends are contributing to this shift. As medical image resolution improves, medical image file sizes are growing; many images are 1 GB or larger. More healthcare organisations are using deep learning inference to review patient images more quickly and accurately, and they are looking for ways to do this without buying expensive infrastructure.